00:00:00.001 Started by upstream project "autotest-per-patch" build number 132822
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.101 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.101 The recommended git tool is: git
00:00:00.102 using credential 00000000-0000-0000-0000-000000000002
00:00:00.103 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.145 Fetching changes from the remote Git repository
00:00:00.148 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.182 Using shallow fetch with depth 1
00:00:00.182 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.182 > git --version # timeout=10
00:00:00.213 > git --version # 'git version 2.39.2'
00:00:00.213 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.236 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.237 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.168 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.179 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.190 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.190 > git config core.sparsecheckout # timeout=10
00:00:06.201 > git read-tree -mu HEAD # timeout=10
00:00:06.214 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.240 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.241 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.371 [Pipeline] Start of Pipeline
00:00:06.383 [Pipeline] library
00:00:06.385 Loading library shm_lib@master
00:00:06.385 Library shm_lib@master is cached. Copying from home.
00:00:06.401 [Pipeline] node
00:00:06.413 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.415 [Pipeline] {
00:00:06.425 [Pipeline] catchError
00:00:06.426 [Pipeline] {
00:00:06.439 [Pipeline] wrap
00:00:06.448 [Pipeline] {
00:00:06.455 [Pipeline] stage
00:00:06.457 [Pipeline] { (Prologue)
00:00:06.677 [Pipeline] sh
00:00:06.965 + logger -p user.info -t JENKINS-CI
00:00:06.982 [Pipeline] echo
00:00:06.983 Node: WFP4
00:00:06.990 [Pipeline] sh
00:00:07.288 [Pipeline] setCustomBuildProperty
00:00:07.299 [Pipeline] echo
00:00:07.301 Cleanup processes
00:00:07.306 [Pipeline] sh
00:00:07.594 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.594 918341 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.607 [Pipeline] sh
00:00:07.889 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.889 ++ grep -v 'sudo pgrep'
00:00:07.889 ++ awk '{print $1}'
00:00:07.889 + sudo kill -9
00:00:07.889 + true
00:00:07.899 [Pipeline] cleanWs
00:00:07.908 [WS-CLEANUP] Deleting project workspace...
00:00:07.908 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.913 [WS-CLEANUP] done
00:00:07.916 [Pipeline] setCustomBuildProperty
00:00:07.927 [Pipeline] sh
00:00:08.209 + sudo git config --global --replace-all safe.directory '*'
00:00:08.277 [Pipeline] httpRequest
00:00:09.023 [Pipeline] echo
00:00:09.024 Sorcerer 10.211.164.112 is alive
00:00:09.032 [Pipeline] retry
00:00:09.034 [Pipeline] {
00:00:09.044 [Pipeline] httpRequest
00:00:09.048 HttpMethod: GET
00:00:09.048 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.049 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.051 Response Code: HTTP/1.1 200 OK
00:00:09.051 Success: Status code 200 is in the accepted range: 200,404
00:00:09.052 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.482 [Pipeline] }
00:00:10.499 [Pipeline] // retry
00:00:10.506 [Pipeline] sh
00:00:10.790 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.806 [Pipeline] httpRequest
00:00:11.174 [Pipeline] echo
00:00:11.176 Sorcerer 10.211.164.112 is alive
00:00:11.186 [Pipeline] retry
00:00:11.188 [Pipeline] {
00:00:11.201 [Pipeline] httpRequest
00:00:11.205 HttpMethod: GET
00:00:11.205 URL: http://10.211.164.112/packages/spdk_0edc184ec47ea4c43d08e7bad766619005b09a07.tar.gz
00:00:11.206 Sending request to url: http://10.211.164.112/packages/spdk_0edc184ec47ea4c43d08e7bad766619005b09a07.tar.gz
00:00:11.230 Response Code: HTTP/1.1 200 OK
00:00:11.231 Success: Status code 200 is in the accepted range: 200,404
00:00:11.231 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0edc184ec47ea4c43d08e7bad766619005b09a07.tar.gz
00:02:54.304 [Pipeline] }
00:02:54.321 [Pipeline] // retry
00:02:54.329 [Pipeline] sh
00:02:54.613 + tar --no-same-owner -xf spdk_0edc184ec47ea4c43d08e7bad766619005b09a07.tar.gz
00:02:57.162 [Pipeline] sh
00:02:57.446 + git -C spdk log --oneline -n5
00:02:57.446 0edc184ec accel/mlx5: Support mkey registration
00:02:57.446 06358c250 bdev/nvme: use poll_group's fd_group to register interrupts
00:02:57.446 1ae735a5d nvme: add poll_group interrupt callback
00:02:57.446 f80471632 nvme: add spdk_nvme_poll_group_get_fd_group()
00:02:57.446 969b360d9 thread: fd_group-based interrupts
00:02:57.456 [Pipeline] }
00:02:57.472 [Pipeline] // stage
00:02:57.481 [Pipeline] stage
00:02:57.483 [Pipeline] { (Prepare)
00:02:57.499 [Pipeline] writeFile
00:02:57.512 [Pipeline] sh
00:02:57.795 + logger -p user.info -t JENKINS-CI
00:02:57.808 [Pipeline] sh
00:02:58.093 + logger -p user.info -t JENKINS-CI
00:02:58.105 [Pipeline] sh
00:02:58.390 + cat autorun-spdk.conf
00:02:58.390 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:58.390 SPDK_TEST_NVMF=1
00:02:58.390 SPDK_TEST_NVME_CLI=1
00:02:58.390 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:58.390 SPDK_TEST_NVMF_NICS=e810
00:02:58.390 SPDK_TEST_VFIOUSER=1
00:02:58.390 SPDK_RUN_UBSAN=1
00:02:58.390 NET_TYPE=phy
00:02:58.397 RUN_NIGHTLY=0
00:02:58.402 [Pipeline] readFile
00:02:58.429 [Pipeline] withEnv
00:02:58.431 [Pipeline] {
00:02:58.446 [Pipeline] sh
00:02:58.738 + set -ex
00:02:58.738 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:58.738 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:58.738 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:58.738 ++ SPDK_TEST_NVMF=1
00:02:58.738 ++ SPDK_TEST_NVME_CLI=1
00:02:58.738 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:58.738 ++ SPDK_TEST_NVMF_NICS=e810
00:02:58.738 ++ SPDK_TEST_VFIOUSER=1
00:02:58.738 ++ SPDK_RUN_UBSAN=1
00:02:58.738 ++ NET_TYPE=phy
00:02:58.738 ++ RUN_NIGHTLY=0
00:02:58.738 + case $SPDK_TEST_NVMF_NICS in
00:02:58.738 + DRIVERS=ice
00:02:58.738 + [[ tcp == \r\d\m\a ]]
00:02:58.738 + [[ -n ice ]]
00:02:58.738 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:58.738 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:58.738 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:58.738 rmmod: ERROR: Module i40iw is not currently loaded
00:02:58.738 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:58.738 + true
00:02:58.738 + for D in $DRIVERS
00:02:58.738 + sudo modprobe ice
00:02:58.738 + exit 0
00:02:58.747 [Pipeline] }
00:02:58.762 [Pipeline] // withEnv
00:02:58.769 [Pipeline] }
00:02:58.785 [Pipeline] // stage
00:02:58.793 [Pipeline] catchError
00:02:58.795 [Pipeline] {
00:02:58.809 [Pipeline] timeout
00:02:58.809 Timeout set to expire in 1 hr 0 min
00:02:58.811 [Pipeline] {
00:02:58.825 [Pipeline] stage
00:02:58.827 [Pipeline] { (Tests)
00:02:58.841 [Pipeline] sh
00:02:59.126 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:59.127 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:59.127 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:59.127 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:59.127 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:59.127 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:59.127 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:59.127 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:59.127 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:59.127 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:59.127 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:59.127 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:59.127 + source /etc/os-release
00:02:59.127 ++ NAME='Fedora Linux'
00:02:59.127 ++ VERSION='39 (Cloud Edition)'
00:02:59.127 ++ ID=fedora
00:02:59.127 ++ VERSION_ID=39
00:02:59.127 ++ VERSION_CODENAME=
00:02:59.127 ++ PLATFORM_ID=platform:f39
00:02:59.127 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:59.127 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:59.127 ++ LOGO=fedora-logo-icon
00:02:59.127 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:59.127 ++ HOME_URL=https://fedoraproject.org/
00:02:59.127 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:59.127 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:59.127 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:59.127 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:59.127 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:59.127 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:59.127 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:59.127 ++ SUPPORT_END=2024-11-12
00:02:59.127 ++ VARIANT='Cloud Edition'
00:02:59.127 ++ VARIANT_ID=cloud
00:02:59.127 + uname -a
00:02:59.127 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:02:59.127 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:01.664 Hugepages
00:03:01.664 node hugesize free / total
00:03:01.664 node0 1048576kB 0 / 0
00:03:01.664 node0 2048kB 0 / 0
00:03:01.664 node1 1048576kB 0 / 0
00:03:01.664 node1 2048kB 0 / 0
00:03:01.664
00:03:01.664 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:01.664 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:01.664 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:01.664 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:01.664 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:01.664 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:01.664 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:01.664 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:01.664 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:01.664 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:01.664 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:01.664 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:01.664 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:01.664 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:01.664 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:01.664 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:01.664 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:01.664 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:01.664 + rm -f /tmp/spdk-ld-path
00:03:01.664 + source autorun-spdk.conf
00:03:01.664 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:01.664 ++ SPDK_TEST_NVMF=1
00:03:01.664 ++ SPDK_TEST_NVME_CLI=1
00:03:01.664 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:01.664 ++ SPDK_TEST_NVMF_NICS=e810
00:03:01.664 ++ SPDK_TEST_VFIOUSER=1
00:03:01.664 ++ SPDK_RUN_UBSAN=1
00:03:01.664 ++ NET_TYPE=phy
00:03:01.664 ++ RUN_NIGHTLY=0
00:03:01.664 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:01.664 + [[ -n '' ]]
00:03:01.664 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:01.664 + for M in /var/spdk/build-*-manifest.txt
00:03:01.664 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:01.664 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:01.664 + for M in /var/spdk/build-*-manifest.txt
00:03:01.664 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:01.664 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:01.664 + for M in /var/spdk/build-*-manifest.txt
00:03:01.664 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:01.664 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:01.664 ++ uname
00:03:01.664 + [[ Linux == \L\i\n\u\x ]]
00:03:01.664 + sudo dmesg -T
00:03:01.664 + sudo dmesg --clear
00:03:01.923 + dmesg_pid=919802
00:03:01.923 + [[ Fedora Linux == FreeBSD ]]
00:03:01.923 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:01.923 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:01.923 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:01.923 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:01.923 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:01.923 + [[ -x /usr/src/fio-static/fio ]]
00:03:01.923 + export FIO_BIN=/usr/src/fio-static/fio
00:03:01.923 + FIO_BIN=/usr/src/fio-static/fio
00:03:01.923 + sudo dmesg -Tw
00:03:01.923 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:01.923 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:01.923 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:01.923 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:01.923 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:01.923 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:01.923 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:01.923 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:01.923 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:01.923 05:27:49 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:01.923 05:27:49 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:01.923 05:27:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:01.923 05:27:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:01.924 05:27:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:01.924 05:27:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:01.924 05:27:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:03:01.924 05:27:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:03:01.924 05:27:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:03:01.924 05:27:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:03:01.924 05:27:49 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:03:01.924 05:27:49 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:01.924 05:27:49 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:01.924 05:27:49 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:01.924 05:27:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:01.924 05:27:49 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:01.924 05:27:49 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:01.924 05:27:49 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:01.924 05:27:49 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:01.924 05:27:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.924 05:27:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.924 05:27:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.924 05:27:49 -- paths/export.sh@5 -- $ export PATH
00:03:01.924 05:27:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.924 05:27:49 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:01.924 05:27:49 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:01.924 05:27:49 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733804869.XXXXXX
00:03:01.924 05:27:49 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733804869.qNhLBD
00:03:01.924 05:27:49 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:01.924 05:27:49 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:01.924 05:27:49 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:01.924 05:27:49 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:01.924 05:27:49 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:01.924 05:27:49 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:01.924 05:27:49 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:01.924 05:27:49 -- common/autotest_common.sh@10 -- $ set +x
00:03:01.924 05:27:49 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:03:01.924 05:27:49 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:01.924 05:27:49 -- pm/common@17 -- $ local monitor
00:03:01.924 05:27:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.924 05:27:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.924 05:27:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.924 05:27:49 -- pm/common@21 -- $ date +%s
00:03:01.924 05:27:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.924 05:27:49 -- pm/common@21 -- $ date +%s
00:03:01.924 05:27:49 -- pm/common@25 -- $ sleep 1
00:03:01.924 05:27:49 -- pm/common@21 -- $ date +%s
00:03:01.924 05:27:49 -- pm/common@21 -- $ date +%s
00:03:01.924 05:27:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733804869
00:03:01.924 05:27:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733804869
00:03:01.924 05:27:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733804869
00:03:01.924 05:27:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733804869
00:03:01.924 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733804869_collect-vmstat.pm.log
00:03:01.924 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733804869_collect-cpu-load.pm.log
00:03:01.924 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733804869_collect-cpu-temp.pm.log
00:03:01.924 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733804869_collect-bmc-pm.bmc.pm.log
00:03:02.863 05:27:50 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:03.124 05:27:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:03.124 05:27:50 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:03.124 05:27:50 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:03.124 05:27:50 -- spdk/autobuild.sh@16 -- $ date -u
00:03:03.124 Tue Dec 10 04:27:50 AM UTC 2024
00:03:03.124 05:27:50 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:03.124 v25.01-pre-322-g0edc184ec
00:03:03.124 05:27:50 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:03.124 05:27:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:03.124 05:27:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:03.124 05:27:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:03.124 05:27:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:03.124 05:27:50 -- common/autotest_common.sh@10 -- $ set +x
00:03:03.124 ************************************
00:03:03.124 START TEST ubsan
00:03:03.124 ************************************
00:03:03.124 05:27:50 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:03.124 using ubsan
00:03:03.124
00:03:03.124 real 0m0.000s
00:03:03.124 user 0m0.000s
00:03:03.124 sys 0m0.000s
00:03:03.124 05:27:50 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:03.124 05:27:50 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:03.124 ************************************
00:03:03.124 END TEST ubsan
************************************
00:03:03.124 05:27:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:03.124 05:27:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:03.124 05:27:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:03.124 05:27:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:03.124 05:27:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:03.124 05:27:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:03.124 05:27:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:03.124 05:27:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:03.124 05:27:50 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:03.124 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:03.124 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:03.692 Using 'verbs' RDMA provider
00:03:16.475 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:28.715 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:28.715 Creating mk/config.mk...done.
00:03:28.715 Creating mk/cc.flags.mk...done.
00:03:28.715 Type 'make' to build.
00:03:28.715 05:28:16 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:03:28.715 05:28:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:28.715 05:28:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:28.715 05:28:16 -- common/autotest_common.sh@10 -- $ set +x
00:03:28.715 ************************************
00:03:28.715 START TEST make
00:03:28.715 ************************************
00:03:28.715 05:28:16 make -- common/autotest_common.sh@1129 -- $ make -j96
00:03:28.975 make[1]: Nothing to be done for 'all'.
00:03:30.369 The Meson build system
00:03:30.369 Version: 1.5.0
00:03:30.369 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:30.369 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:30.369 Build type: native build
00:03:30.369 Project name: libvfio-user
00:03:30.369 Project version: 0.0.1
00:03:30.369 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:30.369 C linker for the host machine: cc ld.bfd 2.40-14
00:03:30.369 Host machine cpu family: x86_64
00:03:30.369 Host machine cpu: x86_64
00:03:30.369 Run-time dependency threads found: YES
00:03:30.369 Library dl found: YES
00:03:30.370 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:30.370 Run-time dependency json-c found: YES 0.17
00:03:30.370 Run-time dependency cmocka found: YES 1.1.7
00:03:30.370 Program pytest-3 found: NO
00:03:30.370 Program flake8 found: NO
00:03:30.370 Program misspell-fixer found: NO
00:03:30.370 Program restructuredtext-lint found: NO
00:03:30.370 Program valgrind found: YES (/usr/bin/valgrind)
00:03:30.370 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:30.370 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:30.370 Compiler for C supports arguments -Wwrite-strings: YES
00:03:30.370 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:30.370 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:30.370 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:30.370 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:30.370 Build targets in project: 8
00:03:30.370 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:30.370 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:30.370
00:03:30.370 libvfio-user 0.0.1
00:03:30.370
00:03:30.370 User defined options
00:03:30.370 buildtype : debug
00:03:30.370 default_library: shared
00:03:30.370 libdir : /usr/local/lib
00:03:30.370
00:03:30.370 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:30.938 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:31.197 [1/37] Compiling C object samples/null.p/null.c.o
00:03:31.197 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:31.197 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:31.197 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:31.197 [5/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:31.197 [6/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:31.197 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:31.197 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:31.197 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:31.197 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:31.197 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:31.197 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:31.197 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:31.197 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:31.197 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:31.197 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:31.197 [17/37] Compiling C object samples/server.p/server.c.o
00:03:31.197 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:31.197 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:31.197 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:31.197 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:31.197 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:31.197 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:31.197 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:31.197 [25/37] Compiling C object samples/client.p/client.c.o
00:03:31.197 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:31.197 [27/37] Linking target samples/client
00:03:31.197 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:31.197 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:31.456 [30/37] Linking target test/unit_tests
00:03:31.456 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:03:31.456 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:31.456 [33/37] Linking target samples/null
00:03:31.456 [34/37] Linking target samples/lspci
00:03:31.456 [35/37] Linking target samples/shadow_ioeventfd_server
00:03:31.456 [36/37] Linking target samples/server
00:03:31.456 [37/37] Linking target samples/gpio-pci-idio-16
00:03:31.456 INFO: autodetecting backend as ninja
00:03:31.456 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:31.715 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:31.974 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:31.974 ninja: no work to do.
00:03:37.250 The Meson build system
00:03:37.250 Version: 1.5.0
00:03:37.250 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:37.250 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:37.250 Build type: native build
00:03:37.250 Program cat found: YES (/usr/bin/cat)
00:03:37.250 Project name: DPDK
00:03:37.250 Project version: 24.03.0
00:03:37.250 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:37.250 C linker for the host machine: cc ld.bfd 2.40-14
00:03:37.250 Host machine cpu family: x86_64
00:03:37.250 Host machine cpu: x86_64
00:03:37.250 Message: ## Building in Developer Mode ##
00:03:37.250 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:37.250 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:37.250 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:37.250 Program python3 found: YES (/usr/bin/python3)
00:03:37.250 Program cat found: YES (/usr/bin/cat)
00:03:37.250 Compiler for C supports arguments -march=native: YES
00:03:37.250 Checking for size of "void *" : 8
00:03:37.250 Checking for size of "void *" : 8 (cached)
00:03:37.250 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:37.250 Library m found: YES
00:03:37.250 Library numa found: YES
00:03:37.250 Has header "numaif.h" : YES
00:03:37.250 Library fdt found: NO
00:03:37.250 Library execinfo found: NO 00:03:37.250 Has header "execinfo.h" : YES 00:03:37.250 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:37.251 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:37.251 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:37.251 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:37.251 Run-time dependency openssl found: YES 3.1.1 00:03:37.251 Run-time dependency libpcap found: YES 1.10.4 00:03:37.251 Has header "pcap.h" with dependency libpcap: YES 00:03:37.251 Compiler for C supports arguments -Wcast-qual: YES 00:03:37.251 Compiler for C supports arguments -Wdeprecated: YES 00:03:37.251 Compiler for C supports arguments -Wformat: YES 00:03:37.251 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:37.251 Compiler for C supports arguments -Wformat-security: NO 00:03:37.251 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:37.251 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:37.251 Compiler for C supports arguments -Wnested-externs: YES 00:03:37.251 Compiler for C supports arguments -Wold-style-definition: YES 00:03:37.251 Compiler for C supports arguments -Wpointer-arith: YES 00:03:37.251 Compiler for C supports arguments -Wsign-compare: YES 00:03:37.251 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:37.251 Compiler for C supports arguments -Wundef: YES 00:03:37.251 Compiler for C supports arguments -Wwrite-strings: YES 00:03:37.251 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:37.251 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:37.251 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:37.251 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:37.251 Program objdump found: YES (/usr/bin/objdump) 00:03:37.251 Compiler for C supports arguments -mavx512f: YES 00:03:37.251 Checking if "AVX512 checking" compiles: YES 00:03:37.251 
Fetching value of define "__SSE4_2__" : 1 00:03:37.251 Fetching value of define "__AES__" : 1 00:03:37.251 Fetching value of define "__AVX__" : 1 00:03:37.251 Fetching value of define "__AVX2__" : 1 00:03:37.251 Fetching value of define "__AVX512BW__" : 1 00:03:37.251 Fetching value of define "__AVX512CD__" : 1 00:03:37.251 Fetching value of define "__AVX512DQ__" : 1 00:03:37.251 Fetching value of define "__AVX512F__" : 1 00:03:37.251 Fetching value of define "__AVX512VL__" : 1 00:03:37.251 Fetching value of define "__PCLMUL__" : 1 00:03:37.251 Fetching value of define "__RDRND__" : 1 00:03:37.251 Fetching value of define "__RDSEED__" : 1 00:03:37.251 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:37.251 Fetching value of define "__znver1__" : (undefined) 00:03:37.251 Fetching value of define "__znver2__" : (undefined) 00:03:37.251 Fetching value of define "__znver3__" : (undefined) 00:03:37.251 Fetching value of define "__znver4__" : (undefined) 00:03:37.251 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:37.251 Message: lib/log: Defining dependency "log" 00:03:37.251 Message: lib/kvargs: Defining dependency "kvargs" 00:03:37.251 Message: lib/telemetry: Defining dependency "telemetry" 00:03:37.251 Checking for function "getentropy" : NO 00:03:37.251 Message: lib/eal: Defining dependency "eal" 00:03:37.251 Message: lib/ring: Defining dependency "ring" 00:03:37.251 Message: lib/rcu: Defining dependency "rcu" 00:03:37.251 Message: lib/mempool: Defining dependency "mempool" 00:03:37.251 Message: lib/mbuf: Defining dependency "mbuf" 00:03:37.251 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:37.251 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:37.251 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:37.251 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:37.251 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:37.251 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 
00:03:37.251 Compiler for C supports arguments -mpclmul: YES 00:03:37.251 Compiler for C supports arguments -maes: YES 00:03:37.251 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:37.251 Compiler for C supports arguments -mavx512bw: YES 00:03:37.251 Compiler for C supports arguments -mavx512dq: YES 00:03:37.251 Compiler for C supports arguments -mavx512vl: YES 00:03:37.251 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:37.251 Compiler for C supports arguments -mavx2: YES 00:03:37.251 Compiler for C supports arguments -mavx: YES 00:03:37.251 Message: lib/net: Defining dependency "net" 00:03:37.251 Message: lib/meter: Defining dependency "meter" 00:03:37.251 Message: lib/ethdev: Defining dependency "ethdev" 00:03:37.251 Message: lib/pci: Defining dependency "pci" 00:03:37.251 Message: lib/cmdline: Defining dependency "cmdline" 00:03:37.251 Message: lib/hash: Defining dependency "hash" 00:03:37.251 Message: lib/timer: Defining dependency "timer" 00:03:37.251 Message: lib/compressdev: Defining dependency "compressdev" 00:03:37.251 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:37.251 Message: lib/dmadev: Defining dependency "dmadev" 00:03:37.251 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:37.251 Message: lib/power: Defining dependency "power" 00:03:37.251 Message: lib/reorder: Defining dependency "reorder" 00:03:37.251 Message: lib/security: Defining dependency "security" 00:03:37.251 Has header "linux/userfaultfd.h" : YES 00:03:37.251 Has header "linux/vduse.h" : YES 00:03:37.251 Message: lib/vhost: Defining dependency "vhost" 00:03:37.251 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:37.251 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:37.251 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:37.251 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:37.251 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 
00:03:37.251 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:37.251 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:37.251 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:37.251 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:37.251 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:37.251 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:37.251 Configuring doxy-api-html.conf using configuration 00:03:37.251 Configuring doxy-api-man.conf using configuration 00:03:37.251 Program mandb found: YES (/usr/bin/mandb) 00:03:37.251 Program sphinx-build found: NO 00:03:37.251 Configuring rte_build_config.h using configuration 00:03:37.251 Message: 00:03:37.251 ================= 00:03:37.251 Applications Enabled 00:03:37.251 ================= 00:03:37.251 00:03:37.251 apps: 00:03:37.251 00:03:37.251 00:03:37.251 Message: 00:03:37.251 ================= 00:03:37.251 Libraries Enabled 00:03:37.251 ================= 00:03:37.251 00:03:37.251 libs: 00:03:37.251 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:37.251 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:37.251 cryptodev, dmadev, power, reorder, security, vhost, 00:03:37.251 00:03:37.251 Message: 00:03:37.251 =============== 00:03:37.251 Drivers Enabled 00:03:37.251 =============== 00:03:37.251 00:03:37.251 common: 00:03:37.251 00:03:37.251 bus: 00:03:37.251 pci, vdev, 00:03:37.251 mempool: 00:03:37.251 ring, 00:03:37.251 dma: 00:03:37.251 00:03:37.251 net: 00:03:37.251 00:03:37.251 crypto: 00:03:37.251 00:03:37.251 compress: 00:03:37.251 00:03:37.251 vdpa: 00:03:37.251 00:03:37.251 00:03:37.251 Message: 00:03:37.251 ================= 00:03:37.251 Content Skipped 00:03:37.251 ================= 00:03:37.251 00:03:37.251 apps: 00:03:37.251 dumpcap: explicitly disabled via build config 00:03:37.251 graph: explicitly disabled via build 
config 00:03:37.251 pdump: explicitly disabled via build config 00:03:37.251 proc-info: explicitly disabled via build config 00:03:37.251 test-acl: explicitly disabled via build config 00:03:37.251 test-bbdev: explicitly disabled via build config 00:03:37.251 test-cmdline: explicitly disabled via build config 00:03:37.251 test-compress-perf: explicitly disabled via build config 00:03:37.251 test-crypto-perf: explicitly disabled via build config 00:03:37.251 test-dma-perf: explicitly disabled via build config 00:03:37.251 test-eventdev: explicitly disabled via build config 00:03:37.251 test-fib: explicitly disabled via build config 00:03:37.251 test-flow-perf: explicitly disabled via build config 00:03:37.251 test-gpudev: explicitly disabled via build config 00:03:37.251 test-mldev: explicitly disabled via build config 00:03:37.251 test-pipeline: explicitly disabled via build config 00:03:37.251 test-pmd: explicitly disabled via build config 00:03:37.251 test-regex: explicitly disabled via build config 00:03:37.251 test-sad: explicitly disabled via build config 00:03:37.251 test-security-perf: explicitly disabled via build config 00:03:37.251 00:03:37.251 libs: 00:03:37.251 argparse: explicitly disabled via build config 00:03:37.251 metrics: explicitly disabled via build config 00:03:37.251 acl: explicitly disabled via build config 00:03:37.251 bbdev: explicitly disabled via build config 00:03:37.251 bitratestats: explicitly disabled via build config 00:03:37.251 bpf: explicitly disabled via build config 00:03:37.251 cfgfile: explicitly disabled via build config 00:03:37.251 distributor: explicitly disabled via build config 00:03:37.251 efd: explicitly disabled via build config 00:03:37.251 eventdev: explicitly disabled via build config 00:03:37.251 dispatcher: explicitly disabled via build config 00:03:37.251 gpudev: explicitly disabled via build config 00:03:37.251 gro: explicitly disabled via build config 00:03:37.251 gso: explicitly disabled via build config 
00:03:37.251 ip_frag: explicitly disabled via build config 00:03:37.251 jobstats: explicitly disabled via build config 00:03:37.251 latencystats: explicitly disabled via build config 00:03:37.251 lpm: explicitly disabled via build config 00:03:37.251 member: explicitly disabled via build config 00:03:37.251 pcapng: explicitly disabled via build config 00:03:37.251 rawdev: explicitly disabled via build config 00:03:37.251 regexdev: explicitly disabled via build config 00:03:37.251 mldev: explicitly disabled via build config 00:03:37.251 rib: explicitly disabled via build config 00:03:37.251 sched: explicitly disabled via build config 00:03:37.251 stack: explicitly disabled via build config 00:03:37.251 ipsec: explicitly disabled via build config 00:03:37.251 pdcp: explicitly disabled via build config 00:03:37.251 fib: explicitly disabled via build config 00:03:37.251 port: explicitly disabled via build config 00:03:37.251 pdump: explicitly disabled via build config 00:03:37.252 table: explicitly disabled via build config 00:03:37.252 pipeline: explicitly disabled via build config 00:03:37.252 graph: explicitly disabled via build config 00:03:37.252 node: explicitly disabled via build config 00:03:37.252 00:03:37.252 drivers: 00:03:37.252 common/cpt: not in enabled drivers build config 00:03:37.252 common/dpaax: not in enabled drivers build config 00:03:37.252 common/iavf: not in enabled drivers build config 00:03:37.252 common/idpf: not in enabled drivers build config 00:03:37.252 common/ionic: not in enabled drivers build config 00:03:37.252 common/mvep: not in enabled drivers build config 00:03:37.252 common/octeontx: not in enabled drivers build config 00:03:37.252 bus/auxiliary: not in enabled drivers build config 00:03:37.252 bus/cdx: not in enabled drivers build config 00:03:37.252 bus/dpaa: not in enabled drivers build config 00:03:37.252 bus/fslmc: not in enabled drivers build config 00:03:37.252 bus/ifpga: not in enabled drivers build config 00:03:37.252 
bus/platform: not in enabled drivers build config 00:03:37.252 bus/uacce: not in enabled drivers build config 00:03:37.252 bus/vmbus: not in enabled drivers build config 00:03:37.252 common/cnxk: not in enabled drivers build config 00:03:37.252 common/mlx5: not in enabled drivers build config 00:03:37.252 common/nfp: not in enabled drivers build config 00:03:37.252 common/nitrox: not in enabled drivers build config 00:03:37.252 common/qat: not in enabled drivers build config 00:03:37.252 common/sfc_efx: not in enabled drivers build config 00:03:37.252 mempool/bucket: not in enabled drivers build config 00:03:37.252 mempool/cnxk: not in enabled drivers build config 00:03:37.252 mempool/dpaa: not in enabled drivers build config 00:03:37.252 mempool/dpaa2: not in enabled drivers build config 00:03:37.252 mempool/octeontx: not in enabled drivers build config 00:03:37.252 mempool/stack: not in enabled drivers build config 00:03:37.252 dma/cnxk: not in enabled drivers build config 00:03:37.252 dma/dpaa: not in enabled drivers build config 00:03:37.252 dma/dpaa2: not in enabled drivers build config 00:03:37.252 dma/hisilicon: not in enabled drivers build config 00:03:37.252 dma/idxd: not in enabled drivers build config 00:03:37.252 dma/ioat: not in enabled drivers build config 00:03:37.252 dma/skeleton: not in enabled drivers build config 00:03:37.252 net/af_packet: not in enabled drivers build config 00:03:37.252 net/af_xdp: not in enabled drivers build config 00:03:37.252 net/ark: not in enabled drivers build config 00:03:37.252 net/atlantic: not in enabled drivers build config 00:03:37.252 net/avp: not in enabled drivers build config 00:03:37.252 net/axgbe: not in enabled drivers build config 00:03:37.252 net/bnx2x: not in enabled drivers build config 00:03:37.252 net/bnxt: not in enabled drivers build config 00:03:37.252 net/bonding: not in enabled drivers build config 00:03:37.252 net/cnxk: not in enabled drivers build config 00:03:37.252 net/cpfl: not in enabled 
drivers build config 00:03:37.252 net/cxgbe: not in enabled drivers build config 00:03:37.252 net/dpaa: not in enabled drivers build config 00:03:37.252 net/dpaa2: not in enabled drivers build config 00:03:37.252 net/e1000: not in enabled drivers build config 00:03:37.252 net/ena: not in enabled drivers build config 00:03:37.252 net/enetc: not in enabled drivers build config 00:03:37.252 net/enetfec: not in enabled drivers build config 00:03:37.252 net/enic: not in enabled drivers build config 00:03:37.252 net/failsafe: not in enabled drivers build config 00:03:37.252 net/fm10k: not in enabled drivers build config 00:03:37.252 net/gve: not in enabled drivers build config 00:03:37.252 net/hinic: not in enabled drivers build config 00:03:37.252 net/hns3: not in enabled drivers build config 00:03:37.252 net/i40e: not in enabled drivers build config 00:03:37.252 net/iavf: not in enabled drivers build config 00:03:37.252 net/ice: not in enabled drivers build config 00:03:37.252 net/idpf: not in enabled drivers build config 00:03:37.252 net/igc: not in enabled drivers build config 00:03:37.252 net/ionic: not in enabled drivers build config 00:03:37.252 net/ipn3ke: not in enabled drivers build config 00:03:37.252 net/ixgbe: not in enabled drivers build config 00:03:37.252 net/mana: not in enabled drivers build config 00:03:37.252 net/memif: not in enabled drivers build config 00:03:37.252 net/mlx4: not in enabled drivers build config 00:03:37.252 net/mlx5: not in enabled drivers build config 00:03:37.252 net/mvneta: not in enabled drivers build config 00:03:37.252 net/mvpp2: not in enabled drivers build config 00:03:37.252 net/netvsc: not in enabled drivers build config 00:03:37.252 net/nfb: not in enabled drivers build config 00:03:37.252 net/nfp: not in enabled drivers build config 00:03:37.252 net/ngbe: not in enabled drivers build config 00:03:37.252 net/null: not in enabled drivers build config 00:03:37.252 net/octeontx: not in enabled drivers build config 
00:03:37.252 net/octeon_ep: not in enabled drivers build config 00:03:37.252 net/pcap: not in enabled drivers build config 00:03:37.252 net/pfe: not in enabled drivers build config 00:03:37.252 net/qede: not in enabled drivers build config 00:03:37.252 net/ring: not in enabled drivers build config 00:03:37.252 net/sfc: not in enabled drivers build config 00:03:37.252 net/softnic: not in enabled drivers build config 00:03:37.252 net/tap: not in enabled drivers build config 00:03:37.252 net/thunderx: not in enabled drivers build config 00:03:37.252 net/txgbe: not in enabled drivers build config 00:03:37.252 net/vdev_netvsc: not in enabled drivers build config 00:03:37.252 net/vhost: not in enabled drivers build config 00:03:37.252 net/virtio: not in enabled drivers build config 00:03:37.252 net/vmxnet3: not in enabled drivers build config 00:03:37.252 raw/*: missing internal dependency, "rawdev" 00:03:37.252 crypto/armv8: not in enabled drivers build config 00:03:37.252 crypto/bcmfs: not in enabled drivers build config 00:03:37.252 crypto/caam_jr: not in enabled drivers build config 00:03:37.252 crypto/ccp: not in enabled drivers build config 00:03:37.252 crypto/cnxk: not in enabled drivers build config 00:03:37.252 crypto/dpaa_sec: not in enabled drivers build config 00:03:37.252 crypto/dpaa2_sec: not in enabled drivers build config 00:03:37.252 crypto/ipsec_mb: not in enabled drivers build config 00:03:37.252 crypto/mlx5: not in enabled drivers build config 00:03:37.252 crypto/mvsam: not in enabled drivers build config 00:03:37.252 crypto/nitrox: not in enabled drivers build config 00:03:37.252 crypto/null: not in enabled drivers build config 00:03:37.252 crypto/octeontx: not in enabled drivers build config 00:03:37.252 crypto/openssl: not in enabled drivers build config 00:03:37.252 crypto/scheduler: not in enabled drivers build config 00:03:37.252 crypto/uadk: not in enabled drivers build config 00:03:37.252 crypto/virtio: not in enabled drivers build config 
00:03:37.252 compress/isal: not in enabled drivers build config 00:03:37.252 compress/mlx5: not in enabled drivers build config 00:03:37.252 compress/nitrox: not in enabled drivers build config 00:03:37.252 compress/octeontx: not in enabled drivers build config 00:03:37.252 compress/zlib: not in enabled drivers build config 00:03:37.252 regex/*: missing internal dependency, "regexdev" 00:03:37.252 ml/*: missing internal dependency, "mldev" 00:03:37.252 vdpa/ifc: not in enabled drivers build config 00:03:37.252 vdpa/mlx5: not in enabled drivers build config 00:03:37.252 vdpa/nfp: not in enabled drivers build config 00:03:37.252 vdpa/sfc: not in enabled drivers build config 00:03:37.252 event/*: missing internal dependency, "eventdev" 00:03:37.252 baseband/*: missing internal dependency, "bbdev" 00:03:37.252 gpu/*: missing internal dependency, "gpudev" 00:03:37.252 00:03:37.252 00:03:37.252 Build targets in project: 85 00:03:37.252 00:03:37.252 DPDK 24.03.0 00:03:37.252 00:03:37.252 User defined options 00:03:37.252 buildtype : debug 00:03:37.252 default_library : shared 00:03:37.252 libdir : lib 00:03:37.252 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:37.252 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:37.252 c_link_args : 00:03:37.252 cpu_instruction_set: native 00:03:37.252 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:37.252 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:37.252 enable_docs : false 00:03:37.252 enable_drivers : 
bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:37.252 enable_kmods : false 00:03:37.252 max_lcores : 128 00:03:37.252 tests : false 00:03:37.252 00:03:37.252 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:37.524 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:37.524 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:37.524 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:37.785 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:37.786 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:37.786 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:37.786 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:37.786 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:37.786 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:37.786 [9/268] Linking static target lib/librte_kvargs.a 00:03:37.786 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:37.786 [11/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:37.786 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:37.786 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:37.786 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:37.786 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:37.786 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:37.786 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:37.786 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:37.786 [19/268] Linking 
static target lib/librte_log.a 00:03:37.786 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:37.786 [21/268] Linking static target lib/librte_pci.a 00:03:37.786 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:38.050 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:38.050 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:38.050 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:38.050 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:38.050 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:38.050 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:38.050 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:38.050 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:38.311 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:38.311 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:38.311 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:38.311 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:38.311 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:38.311 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:38.311 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:38.311 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:38.311 [39/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:38.311 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:38.311 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:38.311 [42/268] Compiling C 
object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:38.311 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:38.311 [44/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:38.311 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:38.311 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:38.311 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:38.311 [48/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:38.311 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:38.311 [50/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:38.311 [51/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:38.311 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:38.311 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:38.311 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:38.311 [55/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:38.311 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:38.311 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:38.311 [58/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:38.311 [59/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:38.311 [60/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:38.311 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:38.311 [62/268] Linking static target lib/librte_meter.a 00:03:38.311 [63/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:38.311 [64/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:38.311 
[65/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:38.311 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:38.311 [67/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:38.311 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:38.311 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:38.311 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:38.311 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:38.311 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:38.311 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:38.311 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:38.311 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:38.311 [76/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:38.311 [77/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:38.311 [78/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.311 [79/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:38.311 [80/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:38.311 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:38.311 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:38.311 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:38.311 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:38.311 [85/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:38.311 [86/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:38.311 [87/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:38.311 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:38.311 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:38.311 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:38.311 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:38.311 [92/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:38.311 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:38.311 [94/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:38.311 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:38.311 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:38.311 [97/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:38.311 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:38.311 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:38.311 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:38.311 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:38.311 [102/268] Linking static target lib/librte_ring.a 00:03:38.311 [103/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.311 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:38.311 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:38.311 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:38.311 [107/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:38.311 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:38.311 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:38.570 
[110/268] Linking static target lib/librte_rcu.a 00:03:38.570 [111/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:38.570 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:38.570 [113/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:38.570 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:38.570 [115/268] Linking static target lib/librte_cmdline.a 00:03:38.570 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:38.570 [117/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:38.570 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:38.570 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:38.570 [120/268] Linking static target lib/librte_telemetry.a 00:03:38.570 [121/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:38.570 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:38.570 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:38.570 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:38.570 [125/268] Linking static target lib/librte_eal.a 00:03:38.570 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:38.570 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:38.570 [128/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:38.570 [129/268] Linking static target lib/librte_net.a 00:03:38.570 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:38.570 [131/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:38.570 [132/268] Linking static target lib/librte_mempool.a 00:03:38.570 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 
00:03:38.570 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.570 [135/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:38.570 [136/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:38.570 [137/268] Linking static target lib/librte_mbuf.a 00:03:38.570 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:38.570 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:38.570 [140/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:38.570 [141/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.570 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:38.828 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:38.828 [144/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.828 [145/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.828 [146/268] Linking target lib/librte_log.so.24.1 00:03:38.828 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:38.828 [148/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:38.828 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:38.828 [150/268] Linking static target lib/librte_timer.a 00:03:38.828 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:38.828 [152/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:38.828 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:38.828 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:38.828 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:38.828 [156/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:38.828 [157/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:38.828 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:38.828 [159/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:38.828 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:38.828 [161/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:38.828 [162/268] Linking static target lib/librte_dmadev.a 00:03:38.828 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:38.828 [164/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:38.828 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:38.828 [166/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:38.828 [167/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:38.828 [168/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.828 [169/268] Linking static target lib/librte_compressdev.a 00:03:38.828 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:38.828 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:38.828 [172/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:38.828 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:38.828 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:38.828 [175/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:38.828 [176/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:38.828 [177/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:38.828 
[178/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.828 [179/268] Linking target lib/librte_kvargs.so.24.1 00:03:38.828 [180/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:38.828 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:38.828 [182/268] Linking static target lib/librte_security.a 00:03:38.828 [183/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:38.828 [184/268] Linking target lib/librte_telemetry.so.24.1 00:03:38.828 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:38.828 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:38.828 [187/268] Linking static target lib/librte_reorder.a 00:03:38.828 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:38.828 [189/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:39.087 [190/268] Linking static target lib/librte_power.a 00:03:39.087 [191/268] Linking static target lib/librte_hash.a 00:03:39.087 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:39.087 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:39.087 [194/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:39.087 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:39.087 [196/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:39.087 [197/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:39.087 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:39.087 [199/268] Linking static target drivers/librte_bus_vdev.a 00:03:39.087 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:39.087 [201/268] Generating 
drivers/rte_mempool_ring.pmd.c with a custom command 00:03:39.087 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:39.087 [203/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:39.087 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:39.087 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:39.087 [206/268] Linking static target drivers/librte_mempool_ring.a 00:03:39.087 [207/268] Linking static target lib/librte_cryptodev.a 00:03:39.087 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:39.087 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:39.087 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:39.087 [211/268] Linking static target drivers/librte_bus_pci.a 00:03:39.087 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.087 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:39.344 [214/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.344 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.344 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.344 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.603 [218/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.603 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.603 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:39.603 [221/268] Generating 
lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.603 [222/268] Linking static target lib/librte_ethdev.a 00:03:39.603 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.861 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:39.861 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.861 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.861 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.794 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:40.794 [229/268] Linking static target lib/librte_vhost.a 00:03:41.052 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.954 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.223 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.483 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.483 [234/268] Linking target lib/librte_eal.so.24.1 00:03:48.742 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:48.742 [236/268] Linking target lib/librte_pci.so.24.1 00:03:48.742 [237/268] Linking target lib/librte_ring.so.24.1 00:03:48.742 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:48.742 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:48.742 [240/268] Linking target lib/librte_meter.so.24.1 00:03:48.742 [241/268] Linking target lib/librte_timer.so.24.1 00:03:49.001 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:49.001 [243/268] Generating symbol file 
lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:49.001 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:49.001 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:49.001 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:49.001 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:49.001 [248/268] Linking target lib/librte_mempool.so.24.1 00:03:49.001 [249/268] Linking target lib/librte_rcu.so.24.1 00:03:49.001 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:49.001 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:49.259 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:49.259 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:49.259 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:49.259 [255/268] Linking target lib/librte_net.so.24.1 00:03:49.259 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:49.259 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:49.259 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:49.518 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:49.518 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:49.518 [261/268] Linking target lib/librte_hash.so.24.1 00:03:49.518 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:49.518 [263/268] Linking target lib/librte_security.so.24.1 00:03:49.518 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:49.518 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:49.777 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:49.777 [267/268] Linking target lib/librte_power.so.24.1 
00:03:49.777 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:49.777 INFO: autodetecting backend as ninja 00:03:49.777 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:59.760 CC lib/log/log.o 00:03:59.760 CC lib/log/log_flags.o 00:03:59.760 CC lib/log/log_deprecated.o 00:03:59.760 CC lib/ut_mock/mock.o 00:03:59.760 CC lib/ut/ut.o 00:03:59.760 LIB libspdk_ut_mock.a 00:03:59.760 LIB libspdk_ut.a 00:03:59.760 LIB libspdk_log.a 00:03:59.760 SO libspdk_ut_mock.so.6.0 00:03:59.760 SO libspdk_ut.so.2.0 00:03:59.760 SO libspdk_log.so.7.1 00:03:59.760 SYMLINK libspdk_ut_mock.so 00:03:59.760 SYMLINK libspdk_ut.so 00:03:59.760 SYMLINK libspdk_log.so 00:04:00.019 CC lib/ioat/ioat.o 00:04:00.019 CXX lib/trace_parser/trace.o 00:04:00.019 CC lib/util/base64.o 00:04:00.019 CC lib/dma/dma.o 00:04:00.019 CC lib/util/bit_array.o 00:04:00.019 CC lib/util/cpuset.o 00:04:00.019 CC lib/util/crc16.o 00:04:00.019 CC lib/util/crc32.o 00:04:00.019 CC lib/util/crc32c.o 00:04:00.019 CC lib/util/crc32_ieee.o 00:04:00.019 CC lib/util/crc64.o 00:04:00.019 CC lib/util/dif.o 00:04:00.019 CC lib/util/fd.o 00:04:00.019 CC lib/util/fd_group.o 00:04:00.019 CC lib/util/file.o 00:04:00.019 CC lib/util/hexlify.o 00:04:00.019 CC lib/util/iov.o 00:04:00.019 CC lib/util/math.o 00:04:00.019 CC lib/util/net.o 00:04:00.019 CC lib/util/pipe.o 00:04:00.019 CC lib/util/strerror_tls.o 00:04:00.019 CC lib/util/string.o 00:04:00.019 CC lib/util/uuid.o 00:04:00.019 CC lib/util/xor.o 00:04:00.019 CC lib/util/zipf.o 00:04:00.019 CC lib/util/md5.o 00:04:00.278 CC lib/vfio_user/host/vfio_user_pci.o 00:04:00.278 CC lib/vfio_user/host/vfio_user.o 00:04:00.278 LIB libspdk_dma.a 00:04:00.278 SO libspdk_dma.so.5.0 00:04:00.278 LIB libspdk_ioat.a 00:04:00.536 SYMLINK libspdk_dma.so 00:04:00.536 SO libspdk_ioat.so.7.0 00:04:00.536 SYMLINK libspdk_ioat.so 00:04:00.536 LIB libspdk_vfio_user.a 00:04:00.536 SO 
libspdk_vfio_user.so.5.0 00:04:00.536 SYMLINK libspdk_vfio_user.so 00:04:00.536 LIB libspdk_util.a 00:04:00.796 SO libspdk_util.so.10.1 00:04:00.796 SYMLINK libspdk_util.so 00:04:00.796 LIB libspdk_trace_parser.a 00:04:00.796 SO libspdk_trace_parser.so.6.0 00:04:01.055 SYMLINK libspdk_trace_parser.so 00:04:01.055 CC lib/json/json_parse.o 00:04:01.055 CC lib/json/json_util.o 00:04:01.055 CC lib/json/json_write.o 00:04:01.055 CC lib/env_dpdk/env.o 00:04:01.055 CC lib/env_dpdk/memory.o 00:04:01.055 CC lib/env_dpdk/pci.o 00:04:01.055 CC lib/env_dpdk/init.o 00:04:01.055 CC lib/env_dpdk/threads.o 00:04:01.055 CC lib/idxd/idxd.o 00:04:01.055 CC lib/env_dpdk/pci_ioat.o 00:04:01.055 CC lib/idxd/idxd_user.o 00:04:01.055 CC lib/env_dpdk/pci_virtio.o 00:04:01.055 CC lib/env_dpdk/pci_vmd.o 00:04:01.055 CC lib/vmd/vmd.o 00:04:01.055 CC lib/idxd/idxd_kernel.o 00:04:01.055 CC lib/vmd/led.o 00:04:01.055 CC lib/rdma_utils/rdma_utils.o 00:04:01.055 CC lib/env_dpdk/pci_idxd.o 00:04:01.055 CC lib/env_dpdk/pci_event.o 00:04:01.055 CC lib/conf/conf.o 00:04:01.055 CC lib/env_dpdk/sigbus_handler.o 00:04:01.055 CC lib/env_dpdk/pci_dpdk.o 00:04:01.055 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:01.055 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:01.313 LIB libspdk_conf.a 00:04:01.313 SO libspdk_conf.so.6.0 00:04:01.313 LIB libspdk_json.a 00:04:01.313 SO libspdk_json.so.6.0 00:04:01.313 LIB libspdk_rdma_utils.a 00:04:01.313 SYMLINK libspdk_conf.so 00:04:01.571 SO libspdk_rdma_utils.so.1.0 00:04:01.571 SYMLINK libspdk_json.so 00:04:01.571 SYMLINK libspdk_rdma_utils.so 00:04:01.571 LIB libspdk_idxd.a 00:04:01.571 LIB libspdk_vmd.a 00:04:01.571 SO libspdk_idxd.so.12.1 00:04:01.571 SO libspdk_vmd.so.6.0 00:04:01.830 SYMLINK libspdk_idxd.so 00:04:01.830 SYMLINK libspdk_vmd.so 00:04:01.830 CC lib/jsonrpc/jsonrpc_server.o 00:04:01.830 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:01.830 CC lib/jsonrpc/jsonrpc_client.o 00:04:01.830 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:01.830 CC lib/rdma_provider/common.o 
00:04:01.830 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:02.089 LIB libspdk_jsonrpc.a 00:04:02.089 LIB libspdk_rdma_provider.a 00:04:02.089 SO libspdk_jsonrpc.so.6.0 00:04:02.089 SO libspdk_rdma_provider.so.7.0 00:04:02.089 SYMLINK libspdk_jsonrpc.so 00:04:02.089 SYMLINK libspdk_rdma_provider.so 00:04:02.089 LIB libspdk_env_dpdk.a 00:04:02.347 SO libspdk_env_dpdk.so.15.1 00:04:02.347 SYMLINK libspdk_env_dpdk.so 00:04:02.347 CC lib/rpc/rpc.o 00:04:02.606 LIB libspdk_rpc.a 00:04:02.606 SO libspdk_rpc.so.6.0 00:04:02.606 SYMLINK libspdk_rpc.so 00:04:02.865 CC lib/trace/trace.o 00:04:02.865 CC lib/trace/trace_flags.o 00:04:02.865 CC lib/notify/notify.o 00:04:02.865 CC lib/notify/notify_rpc.o 00:04:02.865 CC lib/trace/trace_rpc.o 00:04:02.865 CC lib/keyring/keyring.o 00:04:02.865 CC lib/keyring/keyring_rpc.o 00:04:03.124 LIB libspdk_notify.a 00:04:03.124 SO libspdk_notify.so.6.0 00:04:03.124 LIB libspdk_keyring.a 00:04:03.124 LIB libspdk_trace.a 00:04:03.124 SO libspdk_keyring.so.2.0 00:04:03.124 SYMLINK libspdk_notify.so 00:04:03.124 SO libspdk_trace.so.11.0 00:04:03.383 SYMLINK libspdk_keyring.so 00:04:03.383 SYMLINK libspdk_trace.so 00:04:03.643 CC lib/thread/thread.o 00:04:03.643 CC lib/thread/iobuf.o 00:04:03.643 CC lib/sock/sock.o 00:04:03.643 CC lib/sock/sock_rpc.o 00:04:03.902 LIB libspdk_sock.a 00:04:03.902 SO libspdk_sock.so.10.0 00:04:03.902 SYMLINK libspdk_sock.so 00:04:04.470 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:04.470 CC lib/nvme/nvme_ctrlr.o 00:04:04.470 CC lib/nvme/nvme_fabric.o 00:04:04.470 CC lib/nvme/nvme_ns_cmd.o 00:04:04.470 CC lib/nvme/nvme_ns.o 00:04:04.470 CC lib/nvme/nvme_pcie_common.o 00:04:04.470 CC lib/nvme/nvme_pcie.o 00:04:04.470 CC lib/nvme/nvme_qpair.o 00:04:04.470 CC lib/nvme/nvme.o 00:04:04.470 CC lib/nvme/nvme_quirks.o 00:04:04.470 CC lib/nvme/nvme_transport.o 00:04:04.470 CC lib/nvme/nvme_discovery.o 00:04:04.470 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:04.470 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:04.470 CC lib/nvme/nvme_tcp.o 
00:04:04.470 CC lib/nvme/nvme_opal.o 00:04:04.470 CC lib/nvme/nvme_io_msg.o 00:04:04.470 CC lib/nvme/nvme_poll_group.o 00:04:04.470 CC lib/nvme/nvme_zns.o 00:04:04.470 CC lib/nvme/nvme_stubs.o 00:04:04.470 CC lib/nvme/nvme_auth.o 00:04:04.470 CC lib/nvme/nvme_cuse.o 00:04:04.470 CC lib/nvme/nvme_vfio_user.o 00:04:04.470 CC lib/nvme/nvme_rdma.o 00:04:04.729 LIB libspdk_thread.a 00:04:04.729 SO libspdk_thread.so.11.0 00:04:04.729 SYMLINK libspdk_thread.so 00:04:04.988 CC lib/virtio/virtio.o 00:04:04.988 CC lib/virtio/virtio_vhost_user.o 00:04:04.988 CC lib/virtio/virtio_vfio_user.o 00:04:04.988 CC lib/virtio/virtio_pci.o 00:04:04.988 CC lib/blob/blobstore.o 00:04:04.988 CC lib/init/json_config.o 00:04:04.988 CC lib/blob/request.o 00:04:04.988 CC lib/init/subsystem.o 00:04:04.988 CC lib/blob/zeroes.o 00:04:04.988 CC lib/init/subsystem_rpc.o 00:04:04.988 CC lib/blob/blob_bs_dev.o 00:04:04.988 CC lib/init/rpc.o 00:04:04.988 CC lib/accel/accel.o 00:04:04.988 CC lib/accel/accel_rpc.o 00:04:04.988 CC lib/accel/accel_sw.o 00:04:04.988 CC lib/fsdev/fsdev.o 00:04:04.988 CC lib/fsdev/fsdev_io.o 00:04:04.988 CC lib/fsdev/fsdev_rpc.o 00:04:04.988 CC lib/vfu_tgt/tgt_endpoint.o 00:04:04.988 CC lib/vfu_tgt/tgt_rpc.o 00:04:05.246 LIB libspdk_init.a 00:04:05.246 LIB libspdk_virtio.a 00:04:05.246 SO libspdk_init.so.6.0 00:04:05.246 SO libspdk_virtio.so.7.0 00:04:05.246 LIB libspdk_vfu_tgt.a 00:04:05.505 SO libspdk_vfu_tgt.so.3.0 00:04:05.505 SYMLINK libspdk_init.so 00:04:05.505 SYMLINK libspdk_virtio.so 00:04:05.505 SYMLINK libspdk_vfu_tgt.so 00:04:05.505 LIB libspdk_fsdev.a 00:04:05.764 SO libspdk_fsdev.so.2.0 00:04:05.764 SYMLINK libspdk_fsdev.so 00:04:05.764 CC lib/event/app.o 00:04:05.764 CC lib/event/reactor.o 00:04:05.764 CC lib/event/log_rpc.o 00:04:05.764 CC lib/event/app_rpc.o 00:04:05.764 CC lib/event/scheduler_static.o 00:04:06.023 LIB libspdk_accel.a 00:04:06.023 SO libspdk_accel.so.16.0 00:04:06.023 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:06.023 SYMLINK 
libspdk_accel.so 00:04:06.023 LIB libspdk_event.a 00:04:06.023 LIB libspdk_nvme.a 00:04:06.023 SO libspdk_event.so.14.0 00:04:06.282 SYMLINK libspdk_event.so 00:04:06.282 SO libspdk_nvme.so.15.0 00:04:06.282 CC lib/bdev/bdev.o 00:04:06.282 CC lib/bdev/bdev_rpc.o 00:04:06.282 CC lib/bdev/bdev_zone.o 00:04:06.282 CC lib/bdev/part.o 00:04:06.282 CC lib/bdev/scsi_nvme.o 00:04:06.282 SYMLINK libspdk_nvme.so 00:04:06.541 LIB libspdk_fuse_dispatcher.a 00:04:06.541 SO libspdk_fuse_dispatcher.so.1.0 00:04:06.541 SYMLINK libspdk_fuse_dispatcher.so 00:04:07.108 LIB libspdk_blob.a 00:04:07.366 SO libspdk_blob.so.12.0 00:04:07.366 SYMLINK libspdk_blob.so 00:04:07.624 CC lib/lvol/lvol.o 00:04:07.624 CC lib/blobfs/blobfs.o 00:04:07.624 CC lib/blobfs/tree.o 00:04:08.191 LIB libspdk_bdev.a 00:04:08.191 SO libspdk_bdev.so.17.0 00:04:08.191 LIB libspdk_blobfs.a 00:04:08.191 SO libspdk_blobfs.so.11.0 00:04:08.191 LIB libspdk_lvol.a 00:04:08.191 SYMLINK libspdk_bdev.so 00:04:08.449 SO libspdk_lvol.so.11.0 00:04:08.449 SYMLINK libspdk_blobfs.so 00:04:08.449 SYMLINK libspdk_lvol.so 00:04:08.710 CC lib/scsi/dev.o 00:04:08.710 CC lib/scsi/lun.o 00:04:08.710 CC lib/scsi/port.o 00:04:08.710 CC lib/scsi/scsi.o 00:04:08.710 CC lib/scsi/scsi_bdev.o 00:04:08.710 CC lib/scsi/scsi_pr.o 00:04:08.710 CC lib/scsi/scsi_rpc.o 00:04:08.710 CC lib/scsi/task.o 00:04:08.710 CC lib/nvmf/ctrlr.o 00:04:08.710 CC lib/ftl/ftl_core.o 00:04:08.710 CC lib/nvmf/ctrlr_discovery.o 00:04:08.710 CC lib/ftl/ftl_init.o 00:04:08.710 CC lib/nvmf/subsystem.o 00:04:08.710 CC lib/nvmf/ctrlr_bdev.o 00:04:08.710 CC lib/ublk/ublk.o 00:04:08.710 CC lib/nbd/nbd.o 00:04:08.710 CC lib/nbd/nbd_rpc.o 00:04:08.710 CC lib/ftl/ftl_layout.o 00:04:08.710 CC lib/ftl/ftl_debug.o 00:04:08.710 CC lib/ublk/ublk_rpc.o 00:04:08.710 CC lib/nvmf/nvmf.o 00:04:08.710 CC lib/ftl/ftl_io.o 00:04:08.710 CC lib/nvmf/nvmf_rpc.o 00:04:08.710 CC lib/ftl/ftl_sb.o 00:04:08.710 CC lib/nvmf/transport.o 00:04:08.710 CC lib/nvmf/tcp.o 00:04:08.710 CC 
lib/ftl/ftl_l2p.o 00:04:08.710 CC lib/nvmf/stubs.o 00:04:08.710 CC lib/ftl/ftl_l2p_flat.o 00:04:08.710 CC lib/ftl/ftl_band.o 00:04:08.710 CC lib/nvmf/mdns_server.o 00:04:08.710 CC lib/ftl/ftl_nv_cache.o 00:04:08.710 CC lib/nvmf/vfio_user.o 00:04:08.710 CC lib/nvmf/rdma.o 00:04:08.710 CC lib/ftl/ftl_band_ops.o 00:04:08.710 CC lib/nvmf/auth.o 00:04:08.710 CC lib/ftl/ftl_rq.o 00:04:08.710 CC lib/ftl/ftl_writer.o 00:04:08.710 CC lib/ftl/ftl_reloc.o 00:04:08.710 CC lib/ftl/ftl_p2l_log.o 00:04:08.710 CC lib/ftl/ftl_l2p_cache.o 00:04:08.710 CC lib/ftl/ftl_p2l.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:08.710 CC lib/ftl/utils/ftl_conf.o 00:04:08.710 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:08.710 CC lib/ftl/utils/ftl_bitmap.o 00:04:08.710 CC lib/ftl/utils/ftl_md.o 00:04:08.710 CC lib/ftl/utils/ftl_mempool.o 00:04:08.710 CC lib/ftl/utils/ftl_property.o 00:04:08.710 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:08.710 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:08.710 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:08.710 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:08.710 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:08.710 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:08.710 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:08.710 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:08.710 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:08.710 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:08.710 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:08.710 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:08.710 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:08.710 CC lib/ftl/base/ftl_base_bdev.o 00:04:08.710 CC lib/ftl/base/ftl_base_dev.o 00:04:08.710 CC lib/ftl/ftl_trace.o 00:04:09.276 LIB libspdk_nbd.a 00:04:09.276 SO libspdk_nbd.so.7.0 00:04:09.276 SYMLINK libspdk_nbd.so 00:04:09.276 LIB libspdk_scsi.a 00:04:09.276 SO libspdk_scsi.so.9.0 00:04:09.276 LIB libspdk_ublk.a 00:04:09.276 SYMLINK libspdk_scsi.so 00:04:09.276 SO libspdk_ublk.so.3.0 00:04:09.533 SYMLINK libspdk_ublk.so 00:04:09.533 LIB libspdk_ftl.a 00:04:09.792 CC lib/iscsi/conn.o 00:04:09.792 CC lib/iscsi/init_grp.o 00:04:09.792 CC lib/iscsi/iscsi.o 00:04:09.792 CC lib/iscsi/param.o 00:04:09.792 CC lib/vhost/vhost.o 00:04:09.792 CC lib/iscsi/portal_grp.o 00:04:09.792 CC lib/vhost/vhost_rpc.o 00:04:09.792 CC lib/iscsi/tgt_node.o 00:04:09.792 CC lib/vhost/vhost_scsi.o 00:04:09.792 CC lib/iscsi/iscsi_subsystem.o 00:04:09.792 CC lib/vhost/vhost_blk.o 00:04:09.792 CC lib/iscsi/iscsi_rpc.o 00:04:09.792 CC lib/vhost/rte_vhost_user.o 00:04:09.792 CC lib/iscsi/task.o 00:04:09.792 SO libspdk_ftl.so.9.0 00:04:10.050 SYMLINK libspdk_ftl.so 00:04:10.615 LIB libspdk_nvmf.a 00:04:10.615 LIB libspdk_vhost.a 00:04:10.615 SO libspdk_vhost.so.8.0 00:04:10.615 SO libspdk_nvmf.so.20.0 00:04:10.615 SYMLINK libspdk_vhost.so 00:04:10.615 LIB libspdk_iscsi.a 00:04:10.615 SYMLINK libspdk_nvmf.so 00:04:10.615 SO libspdk_iscsi.so.8.0 00:04:10.874 SYMLINK libspdk_iscsi.so 00:04:11.442 CC module/env_dpdk/env_dpdk_rpc.o 00:04:11.442 CC module/vfu_device/vfu_virtio_scsi.o 00:04:11.442 CC module/vfu_device/vfu_virtio.o 00:04:11.442 CC module/vfu_device/vfu_virtio_blk.o 00:04:11.442 CC module/vfu_device/vfu_virtio_rpc.o 00:04:11.442 CC module/vfu_device/vfu_virtio_fs.o 00:04:11.442 CC module/keyring/linux/keyring.o 00:04:11.442 CC module/sock/posix/posix.o 00:04:11.442 CC module/keyring/linux/keyring_rpc.o 00:04:11.442 CC module/accel/ioat/accel_ioat.o 00:04:11.442 CC module/accel/ioat/accel_ioat_rpc.o 00:04:11.442 CC module/accel/iaa/accel_iaa.o 
00:04:11.442 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:11.442 CC module/accel/iaa/accel_iaa_rpc.o 00:04:11.442 CC module/blob/bdev/blob_bdev.o 00:04:11.442 LIB libspdk_env_dpdk_rpc.a 00:04:11.442 CC module/scheduler/gscheduler/gscheduler.o 00:04:11.442 CC module/keyring/file/keyring.o 00:04:11.442 CC module/keyring/file/keyring_rpc.o 00:04:11.442 CC module/accel/error/accel_error.o 00:04:11.442 CC module/accel/error/accel_error_rpc.o 00:04:11.442 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:11.442 CC module/fsdev/aio/linux_aio_mgr.o 00:04:11.442 CC module/fsdev/aio/fsdev_aio.o 00:04:11.442 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:11.442 CC module/accel/dsa/accel_dsa.o 00:04:11.442 CC module/accel/dsa/accel_dsa_rpc.o 00:04:11.442 SO libspdk_env_dpdk_rpc.so.6.0 00:04:11.700 SYMLINK libspdk_env_dpdk_rpc.so 00:04:11.700 LIB libspdk_keyring_linux.a 00:04:11.700 LIB libspdk_keyring_file.a 00:04:11.700 LIB libspdk_scheduler_gscheduler.a 00:04:11.700 SO libspdk_keyring_linux.so.1.0 00:04:11.700 LIB libspdk_scheduler_dpdk_governor.a 00:04:11.700 LIB libspdk_accel_ioat.a 00:04:11.700 SO libspdk_scheduler_gscheduler.so.4.0 00:04:11.700 SO libspdk_keyring_file.so.2.0 00:04:11.700 LIB libspdk_accel_iaa.a 00:04:11.700 LIB libspdk_scheduler_dynamic.a 00:04:11.700 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:11.700 SO libspdk_accel_iaa.so.3.0 00:04:11.700 LIB libspdk_accel_error.a 00:04:11.700 SO libspdk_accel_ioat.so.6.0 00:04:11.700 SYMLINK libspdk_keyring_linux.so 00:04:11.700 SO libspdk_scheduler_dynamic.so.4.0 00:04:11.700 SYMLINK libspdk_keyring_file.so 00:04:11.700 SYMLINK libspdk_scheduler_gscheduler.so 00:04:11.700 SO libspdk_accel_error.so.2.0 00:04:11.700 LIB libspdk_blob_bdev.a 00:04:11.700 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:11.700 SYMLINK libspdk_accel_iaa.so 00:04:11.700 SYMLINK libspdk_accel_ioat.so 00:04:11.700 LIB libspdk_accel_dsa.a 00:04:11.700 SO libspdk_blob_bdev.so.12.0 00:04:11.700 SYMLINK libspdk_scheduler_dynamic.so 
00:04:11.959 SYMLINK libspdk_accel_error.so 00:04:11.959 SO libspdk_accel_dsa.so.5.0 00:04:11.959 SYMLINK libspdk_blob_bdev.so 00:04:11.959 LIB libspdk_vfu_device.a 00:04:11.959 SYMLINK libspdk_accel_dsa.so 00:04:11.959 SO libspdk_vfu_device.so.3.0 00:04:11.959 SYMLINK libspdk_vfu_device.so 00:04:11.959 LIB libspdk_fsdev_aio.a 00:04:12.218 LIB libspdk_sock_posix.a 00:04:12.218 SO libspdk_fsdev_aio.so.1.0 00:04:12.218 SO libspdk_sock_posix.so.6.0 00:04:12.218 SYMLINK libspdk_fsdev_aio.so 00:04:12.218 SYMLINK libspdk_sock_posix.so 00:04:12.218 CC module/bdev/gpt/gpt.o 00:04:12.218 CC module/bdev/gpt/vbdev_gpt.o 00:04:12.218 CC module/bdev/passthru/vbdev_passthru.o 00:04:12.218 CC module/bdev/malloc/bdev_malloc.o 00:04:12.218 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:12.218 CC module/bdev/lvol/vbdev_lvol.o 00:04:12.218 CC module/blobfs/bdev/blobfs_bdev.o 00:04:12.218 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:12.218 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:12.218 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:12.218 CC module/bdev/delay/vbdev_delay.o 00:04:12.218 CC module/bdev/error/vbdev_error_rpc.o 00:04:12.218 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:12.218 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:12.218 CC module/bdev/error/vbdev_error.o 00:04:12.218 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:12.218 CC module/bdev/null/bdev_null.o 00:04:12.476 CC module/bdev/null/bdev_null_rpc.o 00:04:12.476 CC module/bdev/aio/bdev_aio.o 00:04:12.476 CC module/bdev/aio/bdev_aio_rpc.o 00:04:12.476 CC module/bdev/raid/bdev_raid_rpc.o 00:04:12.476 CC module/bdev/raid/bdev_raid.o 00:04:12.476 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:12.476 CC module/bdev/raid/raid1.o 00:04:12.476 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:12.476 CC module/bdev/raid/bdev_raid_sb.o 00:04:12.476 CC module/bdev/raid/raid0.o 00:04:12.476 CC module/bdev/raid/concat.o 00:04:12.476 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:12.476 CC 
module/bdev/nvme/bdev_nvme.o 00:04:12.476 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:12.476 CC module/bdev/nvme/nvme_rpc.o 00:04:12.476 CC module/bdev/ftl/bdev_ftl.o 00:04:12.476 CC module/bdev/nvme/bdev_mdns_client.o 00:04:12.476 CC module/bdev/nvme/vbdev_opal.o 00:04:12.476 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:12.476 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:12.476 CC module/bdev/split/vbdev_split_rpc.o 00:04:12.476 CC module/bdev/split/vbdev_split.o 00:04:12.476 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:12.476 CC module/bdev/iscsi/bdev_iscsi.o 00:04:12.476 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:12.476 LIB libspdk_blobfs_bdev.a 00:04:12.734 SO libspdk_blobfs_bdev.so.6.0 00:04:12.734 LIB libspdk_bdev_split.a 00:04:12.734 LIB libspdk_bdev_error.a 00:04:12.734 LIB libspdk_bdev_gpt.a 00:04:12.734 SYMLINK libspdk_blobfs_bdev.so 00:04:12.734 LIB libspdk_bdev_null.a 00:04:12.734 SO libspdk_bdev_split.so.6.0 00:04:12.734 SO libspdk_bdev_error.so.6.0 00:04:12.734 LIB libspdk_bdev_ftl.a 00:04:12.734 SO libspdk_bdev_gpt.so.6.0 00:04:12.734 LIB libspdk_bdev_zone_block.a 00:04:12.734 SO libspdk_bdev_null.so.6.0 00:04:12.734 LIB libspdk_bdev_passthru.a 00:04:12.734 SO libspdk_bdev_ftl.so.6.0 00:04:12.734 LIB libspdk_bdev_aio.a 00:04:12.734 SYMLINK libspdk_bdev_split.so 00:04:12.734 SO libspdk_bdev_zone_block.so.6.0 00:04:12.734 SYMLINK libspdk_bdev_error.so 00:04:12.734 LIB libspdk_bdev_delay.a 00:04:12.734 SO libspdk_bdev_passthru.so.6.0 00:04:12.734 LIB libspdk_bdev_malloc.a 00:04:12.734 SYMLINK libspdk_bdev_gpt.so 00:04:12.734 SO libspdk_bdev_aio.so.6.0 00:04:12.734 SYMLINK libspdk_bdev_null.so 00:04:12.734 SO libspdk_bdev_delay.so.6.0 00:04:12.734 LIB libspdk_bdev_iscsi.a 00:04:12.734 SYMLINK libspdk_bdev_ftl.so 00:04:12.734 SO libspdk_bdev_malloc.so.6.0 00:04:12.734 SYMLINK libspdk_bdev_passthru.so 00:04:12.734 SYMLINK libspdk_bdev_zone_block.so 00:04:12.734 SO libspdk_bdev_iscsi.so.6.0 00:04:12.734 SYMLINK libspdk_bdev_aio.so 00:04:12.734 SYMLINK 
libspdk_bdev_delay.so 00:04:12.734 LIB libspdk_bdev_lvol.a 00:04:12.993 LIB libspdk_bdev_virtio.a 00:04:12.993 SYMLINK libspdk_bdev_malloc.so 00:04:12.993 SYMLINK libspdk_bdev_iscsi.so 00:04:12.993 SO libspdk_bdev_lvol.so.6.0 00:04:12.993 SO libspdk_bdev_virtio.so.6.0 00:04:12.993 SYMLINK libspdk_bdev_lvol.so 00:04:12.993 SYMLINK libspdk_bdev_virtio.so 00:04:13.252 LIB libspdk_bdev_raid.a 00:04:13.252 SO libspdk_bdev_raid.so.6.0 00:04:13.252 SYMLINK libspdk_bdev_raid.so 00:04:14.188 LIB libspdk_bdev_nvme.a 00:04:14.188 SO libspdk_bdev_nvme.so.7.1 00:04:14.447 SYMLINK libspdk_bdev_nvme.so 00:04:15.015 CC module/event/subsystems/sock/sock.o 00:04:15.015 CC module/event/subsystems/vmd/vmd.o 00:04:15.015 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:15.015 CC module/event/subsystems/iobuf/iobuf.o 00:04:15.015 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:15.015 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:15.015 CC module/event/subsystems/fsdev/fsdev.o 00:04:15.015 CC module/event/subsystems/keyring/keyring.o 00:04:15.015 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:15.015 CC module/event/subsystems/scheduler/scheduler.o 00:04:15.015 LIB libspdk_event_sock.a 00:04:15.015 LIB libspdk_event_vhost_blk.a 00:04:15.015 LIB libspdk_event_fsdev.a 00:04:15.275 LIB libspdk_event_scheduler.a 00:04:15.275 LIB libspdk_event_vmd.a 00:04:15.275 SO libspdk_event_sock.so.5.0 00:04:15.275 LIB libspdk_event_keyring.a 00:04:15.275 LIB libspdk_event_iobuf.a 00:04:15.275 SO libspdk_event_vhost_blk.so.3.0 00:04:15.275 LIB libspdk_event_vfu_tgt.a 00:04:15.275 SO libspdk_event_scheduler.so.4.0 00:04:15.275 SO libspdk_event_fsdev.so.1.0 00:04:15.275 SO libspdk_event_vmd.so.6.0 00:04:15.275 SO libspdk_event_keyring.so.1.0 00:04:15.275 SO libspdk_event_iobuf.so.3.0 00:04:15.275 SO libspdk_event_vfu_tgt.so.3.0 00:04:15.275 SYMLINK libspdk_event_sock.so 00:04:15.275 SYMLINK libspdk_event_vhost_blk.so 00:04:15.275 SYMLINK libspdk_event_scheduler.so 00:04:15.275 SYMLINK 
libspdk_event_fsdev.so 00:04:15.275 SYMLINK libspdk_event_keyring.so 00:04:15.275 SYMLINK libspdk_event_vmd.so 00:04:15.275 SYMLINK libspdk_event_iobuf.so 00:04:15.275 SYMLINK libspdk_event_vfu_tgt.so 00:04:15.534 CC module/event/subsystems/accel/accel.o 00:04:15.793 LIB libspdk_event_accel.a 00:04:15.793 SO libspdk_event_accel.so.6.0 00:04:15.793 SYMLINK libspdk_event_accel.so 00:04:16.052 CC module/event/subsystems/bdev/bdev.o 00:04:16.311 LIB libspdk_event_bdev.a 00:04:16.311 SO libspdk_event_bdev.so.6.0 00:04:16.311 SYMLINK libspdk_event_bdev.so 00:04:16.570 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:16.570 CC module/event/subsystems/scsi/scsi.o 00:04:16.570 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:16.570 CC module/event/subsystems/nbd/nbd.o 00:04:16.570 CC module/event/subsystems/ublk/ublk.o 00:04:16.829 LIB libspdk_event_nbd.a 00:04:16.829 LIB libspdk_event_ublk.a 00:04:16.829 LIB libspdk_event_scsi.a 00:04:16.829 SO libspdk_event_ublk.so.3.0 00:04:16.829 SO libspdk_event_scsi.so.6.0 00:04:16.829 SO libspdk_event_nbd.so.6.0 00:04:16.829 LIB libspdk_event_nvmf.a 00:04:16.829 SYMLINK libspdk_event_ublk.so 00:04:16.829 SYMLINK libspdk_event_scsi.so 00:04:16.829 SYMLINK libspdk_event_nbd.so 00:04:16.829 SO libspdk_event_nvmf.so.6.0 00:04:17.089 SYMLINK libspdk_event_nvmf.so 00:04:17.350 CC module/event/subsystems/iscsi/iscsi.o 00:04:17.350 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:17.350 LIB libspdk_event_vhost_scsi.a 00:04:17.350 LIB libspdk_event_iscsi.a 00:04:17.350 SO libspdk_event_vhost_scsi.so.3.0 00:04:17.350 SO libspdk_event_iscsi.so.6.0 00:04:17.350 SYMLINK libspdk_event_vhost_scsi.so 00:04:17.609 SYMLINK libspdk_event_iscsi.so 00:04:17.609 SO libspdk.so.6.0 00:04:17.609 SYMLINK libspdk.so 00:04:18.182 CXX app/trace/trace.o 00:04:18.182 CC app/trace_record/trace_record.o 00:04:18.182 CC app/spdk_nvme_discover/discovery_aer.o 00:04:18.182 TEST_HEADER include/spdk/accel.h 00:04:18.182 TEST_HEADER include/spdk/accel_module.h 
00:04:18.182 TEST_HEADER include/spdk/assert.h 00:04:18.182 TEST_HEADER include/spdk/barrier.h 00:04:18.182 TEST_HEADER include/spdk/base64.h 00:04:18.182 TEST_HEADER include/spdk/bdev_module.h 00:04:18.182 TEST_HEADER include/spdk/bdev.h 00:04:18.182 TEST_HEADER include/spdk/bdev_zone.h 00:04:18.182 CC test/rpc_client/rpc_client_test.o 00:04:18.182 TEST_HEADER include/spdk/bit_array.h 00:04:18.182 TEST_HEADER include/spdk/blob_bdev.h 00:04:18.182 TEST_HEADER include/spdk/bit_pool.h 00:04:18.182 CC app/spdk_top/spdk_top.o 00:04:18.182 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:18.182 TEST_HEADER include/spdk/blobfs.h 00:04:18.182 TEST_HEADER include/spdk/blob.h 00:04:18.182 TEST_HEADER include/spdk/conf.h 00:04:18.182 TEST_HEADER include/spdk/cpuset.h 00:04:18.182 TEST_HEADER include/spdk/config.h 00:04:18.182 CC app/spdk_nvme_perf/perf.o 00:04:18.182 TEST_HEADER include/spdk/crc64.h 00:04:18.182 TEST_HEADER include/spdk/crc16.h 00:04:18.182 TEST_HEADER include/spdk/dif.h 00:04:18.182 CC app/spdk_lspci/spdk_lspci.o 00:04:18.182 CC app/spdk_nvme_identify/identify.o 00:04:18.182 TEST_HEADER include/spdk/crc32.h 00:04:18.182 TEST_HEADER include/spdk/dma.h 00:04:18.182 TEST_HEADER include/spdk/endian.h 00:04:18.182 TEST_HEADER include/spdk/env_dpdk.h 00:04:18.182 TEST_HEADER include/spdk/env.h 00:04:18.182 TEST_HEADER include/spdk/fd_group.h 00:04:18.182 TEST_HEADER include/spdk/event.h 00:04:18.182 TEST_HEADER include/spdk/fd.h 00:04:18.182 TEST_HEADER include/spdk/file.h 00:04:18.182 TEST_HEADER include/spdk/fsdev.h 00:04:18.182 TEST_HEADER include/spdk/fsdev_module.h 00:04:18.182 TEST_HEADER include/spdk/ftl.h 00:04:18.182 TEST_HEADER include/spdk/gpt_spec.h 00:04:18.182 TEST_HEADER include/spdk/hexlify.h 00:04:18.182 TEST_HEADER include/spdk/histogram_data.h 00:04:18.182 TEST_HEADER include/spdk/idxd.h 00:04:18.182 TEST_HEADER include/spdk/ioat.h 00:04:18.182 TEST_HEADER include/spdk/idxd_spec.h 00:04:18.182 TEST_HEADER include/spdk/init.h 00:04:18.182 
TEST_HEADER include/spdk/iscsi_spec.h 00:04:18.182 TEST_HEADER include/spdk/ioat_spec.h 00:04:18.182 TEST_HEADER include/spdk/json.h 00:04:18.182 TEST_HEADER include/spdk/jsonrpc.h 00:04:18.182 TEST_HEADER include/spdk/keyring.h 00:04:18.182 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:18.182 TEST_HEADER include/spdk/likely.h 00:04:18.182 TEST_HEADER include/spdk/keyring_module.h 00:04:18.182 TEST_HEADER include/spdk/log.h 00:04:18.182 TEST_HEADER include/spdk/lvol.h 00:04:18.182 TEST_HEADER include/spdk/md5.h 00:04:18.182 TEST_HEADER include/spdk/memory.h 00:04:18.182 CC app/nvmf_tgt/nvmf_main.o 00:04:18.182 TEST_HEADER include/spdk/net.h 00:04:18.182 TEST_HEADER include/spdk/nbd.h 00:04:18.182 TEST_HEADER include/spdk/mmio.h 00:04:18.182 TEST_HEADER include/spdk/nvme_intel.h 00:04:18.182 TEST_HEADER include/spdk/notify.h 00:04:18.182 TEST_HEADER include/spdk/nvme.h 00:04:18.182 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:18.182 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:18.182 TEST_HEADER include/spdk/nvme_spec.h 00:04:18.182 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:18.182 TEST_HEADER include/spdk/nvme_zns.h 00:04:18.182 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:18.182 TEST_HEADER include/spdk/nvmf.h 00:04:18.182 TEST_HEADER include/spdk/nvmf_transport.h 00:04:18.182 TEST_HEADER include/spdk/opal.h 00:04:18.182 TEST_HEADER include/spdk/opal_spec.h 00:04:18.182 TEST_HEADER include/spdk/nvmf_spec.h 00:04:18.182 TEST_HEADER include/spdk/pipe.h 00:04:18.182 TEST_HEADER include/spdk/pci_ids.h 00:04:18.182 TEST_HEADER include/spdk/queue.h 00:04:18.182 TEST_HEADER include/spdk/reduce.h 00:04:18.182 TEST_HEADER include/spdk/scheduler.h 00:04:18.182 TEST_HEADER include/spdk/rpc.h 00:04:18.182 CC app/spdk_dd/spdk_dd.o 00:04:18.182 TEST_HEADER include/spdk/scsi_spec.h 00:04:18.182 TEST_HEADER include/spdk/scsi.h 00:04:18.182 TEST_HEADER include/spdk/sock.h 00:04:18.182 TEST_HEADER include/spdk/string.h 00:04:18.182 TEST_HEADER include/spdk/stdinc.h 
00:04:18.182 TEST_HEADER include/spdk/thread.h 00:04:18.182 CC app/iscsi_tgt/iscsi_tgt.o 00:04:18.182 TEST_HEADER include/spdk/trace.h 00:04:18.182 TEST_HEADER include/spdk/trace_parser.h 00:04:18.182 TEST_HEADER include/spdk/tree.h 00:04:18.182 TEST_HEADER include/spdk/ublk.h 00:04:18.182 TEST_HEADER include/spdk/util.h 00:04:18.182 TEST_HEADER include/spdk/uuid.h 00:04:18.182 TEST_HEADER include/spdk/version.h 00:04:18.182 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:18.182 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:18.182 TEST_HEADER include/spdk/xor.h 00:04:18.182 TEST_HEADER include/spdk/vmd.h 00:04:18.182 TEST_HEADER include/spdk/zipf.h 00:04:18.182 TEST_HEADER include/spdk/vhost.h 00:04:18.182 CXX test/cpp_headers/accel.o 00:04:18.182 CXX test/cpp_headers/accel_module.o 00:04:18.182 CXX test/cpp_headers/assert.o 00:04:18.182 CXX test/cpp_headers/base64.o 00:04:18.182 CXX test/cpp_headers/bdev.o 00:04:18.182 CXX test/cpp_headers/barrier.o 00:04:18.182 CXX test/cpp_headers/bdev_module.o 00:04:18.182 CXX test/cpp_headers/bdev_zone.o 00:04:18.182 CXX test/cpp_headers/bit_pool.o 00:04:18.182 CXX test/cpp_headers/bit_array.o 00:04:18.182 CXX test/cpp_headers/blob_bdev.o 00:04:18.182 CXX test/cpp_headers/blobfs.o 00:04:18.182 CXX test/cpp_headers/blobfs_bdev.o 00:04:18.182 CXX test/cpp_headers/blob.o 00:04:18.182 CXX test/cpp_headers/conf.o 00:04:18.182 CXX test/cpp_headers/config.o 00:04:18.182 CXX test/cpp_headers/cpuset.o 00:04:18.182 CXX test/cpp_headers/crc16.o 00:04:18.182 CXX test/cpp_headers/crc32.o 00:04:18.182 CXX test/cpp_headers/crc64.o 00:04:18.182 CXX test/cpp_headers/dif.o 00:04:18.182 CXX test/cpp_headers/dma.o 00:04:18.182 CXX test/cpp_headers/env_dpdk.o 00:04:18.182 CXX test/cpp_headers/env.o 00:04:18.182 CXX test/cpp_headers/endian.o 00:04:18.182 CXX test/cpp_headers/event.o 00:04:18.182 CXX test/cpp_headers/fd_group.o 00:04:18.182 CXX test/cpp_headers/fd.o 00:04:18.182 CC app/spdk_tgt/spdk_tgt.o 00:04:18.182 CXX test/cpp_headers/file.o 
00:04:18.182 CXX test/cpp_headers/fsdev.o 00:04:18.182 CXX test/cpp_headers/fsdev_module.o 00:04:18.182 CXX test/cpp_headers/ftl.o 00:04:18.182 CXX test/cpp_headers/gpt_spec.o 00:04:18.182 CXX test/cpp_headers/hexlify.o 00:04:18.182 CXX test/cpp_headers/init.o 00:04:18.182 CXX test/cpp_headers/histogram_data.o 00:04:18.182 CXX test/cpp_headers/idxd.o 00:04:18.182 CXX test/cpp_headers/idxd_spec.o 00:04:18.182 CXX test/cpp_headers/ioat.o 00:04:18.182 CXX test/cpp_headers/ioat_spec.o 00:04:18.182 CXX test/cpp_headers/iscsi_spec.o 00:04:18.182 CXX test/cpp_headers/jsonrpc.o 00:04:18.182 CXX test/cpp_headers/json.o 00:04:18.182 CXX test/cpp_headers/likely.o 00:04:18.183 CXX test/cpp_headers/keyring.o 00:04:18.183 CXX test/cpp_headers/log.o 00:04:18.183 CXX test/cpp_headers/lvol.o 00:04:18.183 CXX test/cpp_headers/keyring_module.o 00:04:18.183 CXX test/cpp_headers/memory.o 00:04:18.183 CXX test/cpp_headers/md5.o 00:04:18.183 CXX test/cpp_headers/mmio.o 00:04:18.183 CXX test/cpp_headers/nbd.o 00:04:18.183 CXX test/cpp_headers/net.o 00:04:18.183 CXX test/cpp_headers/notify.o 00:04:18.183 CXX test/cpp_headers/nvme.o 00:04:18.183 CXX test/cpp_headers/nvme_intel.o 00:04:18.183 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:18.183 CXX test/cpp_headers/nvme_ocssd.o 00:04:18.183 CXX test/cpp_headers/nvme_zns.o 00:04:18.183 CXX test/cpp_headers/nvmf_cmd.o 00:04:18.183 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:18.183 CXX test/cpp_headers/nvmf.o 00:04:18.183 CXX test/cpp_headers/nvme_spec.o 00:04:18.183 CXX test/cpp_headers/nvmf_spec.o 00:04:18.183 CXX test/cpp_headers/opal.o 00:04:18.183 CXX test/cpp_headers/nvmf_transport.o 00:04:18.183 CXX test/cpp_headers/opal_spec.o 00:04:18.183 CC test/env/memory/memory_ut.o 00:04:18.183 CC app/fio/nvme/fio_plugin.o 00:04:18.183 CC examples/ioat/verify/verify.o 00:04:18.183 CC test/env/vtophys/vtophys.o 00:04:18.183 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:18.183 CC test/thread/poller_perf/poller_perf.o 00:04:18.183 CC 
examples/util/zipf/zipf.o 00:04:18.183 CC examples/ioat/perf/perf.o 00:04:18.183 CC test/app/stub/stub.o 00:04:18.183 CC test/app/histogram_perf/histogram_perf.o 00:04:18.183 CC test/env/pci/pci_ut.o 00:04:18.183 CC test/app/jsoncat/jsoncat.o 00:04:18.183 CC test/app/bdev_svc/bdev_svc.o 00:04:18.448 CC test/dma/test_dma/test_dma.o 00:04:18.448 LINK spdk_lspci 00:04:18.448 CC app/fio/bdev/fio_plugin.o 00:04:18.448 LINK spdk_nvme_discover 00:04:18.448 LINK interrupt_tgt 00:04:18.712 LINK spdk_trace_record 00:04:18.713 LINK rpc_client_test 00:04:18.713 CC test/env/mem_callbacks/mem_callbacks.o 00:04:18.713 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:18.713 LINK iscsi_tgt 00:04:18.713 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:18.713 LINK vtophys 00:04:18.713 LINK nvmf_tgt 00:04:18.713 LINK poller_perf 00:04:18.713 CXX test/cpp_headers/pci_ids.o 00:04:18.713 CXX test/cpp_headers/pipe.o 00:04:18.713 CXX test/cpp_headers/queue.o 00:04:18.713 CXX test/cpp_headers/reduce.o 00:04:18.713 CXX test/cpp_headers/rpc.o 00:04:18.713 LINK env_dpdk_post_init 00:04:18.713 CXX test/cpp_headers/scheduler.o 00:04:18.713 LINK spdk_tgt 00:04:18.713 CXX test/cpp_headers/scsi.o 00:04:18.713 CXX test/cpp_headers/scsi_spec.o 00:04:18.713 CXX test/cpp_headers/sock.o 00:04:18.713 CXX test/cpp_headers/stdinc.o 00:04:18.713 CXX test/cpp_headers/string.o 00:04:18.713 CXX test/cpp_headers/thread.o 00:04:18.713 CXX test/cpp_headers/trace.o 00:04:18.713 CXX test/cpp_headers/trace_parser.o 00:04:18.713 CXX test/cpp_headers/tree.o 00:04:18.713 CXX test/cpp_headers/ublk.o 00:04:18.713 CXX test/cpp_headers/util.o 00:04:18.713 CXX test/cpp_headers/uuid.o 00:04:18.713 CXX test/cpp_headers/version.o 00:04:18.713 CXX test/cpp_headers/vfio_user_pci.o 00:04:18.713 CXX test/cpp_headers/vfio_user_spec.o 00:04:18.713 CXX test/cpp_headers/vhost.o 00:04:18.713 CXX test/cpp_headers/vmd.o 00:04:18.713 CXX test/cpp_headers/xor.o 00:04:18.713 CXX test/cpp_headers/zipf.o 00:04:18.713 LINK bdev_svc 00:04:18.972 LINK 
jsoncat 00:04:18.972 LINK histogram_perf 00:04:18.972 LINK zipf 00:04:18.972 LINK spdk_dd 00:04:18.972 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:18.972 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:18.972 LINK stub 00:04:18.972 LINK verify 00:04:18.972 LINK ioat_perf 00:04:18.972 LINK pci_ut 00:04:19.230 LINK spdk_trace 00:04:19.230 LINK spdk_nvme 00:04:19.230 LINK test_dma 00:04:19.230 CC test/event/reactor_perf/reactor_perf.o 00:04:19.230 CC test/event/event_perf/event_perf.o 00:04:19.230 CC test/event/app_repeat/app_repeat.o 00:04:19.230 CC test/event/reactor/reactor.o 00:04:19.230 LINK spdk_nvme_identify 00:04:19.230 LINK nvme_fuzz 00:04:19.230 CC test/event/scheduler/scheduler.o 00:04:19.230 LINK spdk_bdev 00:04:19.488 CC examples/idxd/perf/perf.o 00:04:19.488 CC examples/vmd/led/led.o 00:04:19.488 LINK vhost_fuzz 00:04:19.488 CC examples/vmd/lsvmd/lsvmd.o 00:04:19.488 CC examples/sock/hello_world/hello_sock.o 00:04:19.488 LINK spdk_top 00:04:19.488 LINK spdk_nvme_perf 00:04:19.488 LINK reactor_perf 00:04:19.488 LINK event_perf 00:04:19.488 CC examples/thread/thread/thread_ex.o 00:04:19.488 LINK reactor 00:04:19.488 LINK mem_callbacks 00:04:19.488 LINK app_repeat 00:04:19.488 CC app/vhost/vhost.o 00:04:19.488 LINK led 00:04:19.488 LINK scheduler 00:04:19.488 LINK lsvmd 00:04:19.746 LINK hello_sock 00:04:19.746 LINK vhost 00:04:19.746 LINK idxd_perf 00:04:19.746 LINK thread 00:04:19.746 CC test/nvme/err_injection/err_injection.o 00:04:19.746 CC test/nvme/connect_stress/connect_stress.o 00:04:19.746 CC test/nvme/reset/reset.o 00:04:19.746 CC test/nvme/simple_copy/simple_copy.o 00:04:19.746 CC test/nvme/fdp/fdp.o 00:04:19.746 CC test/nvme/overhead/overhead.o 00:04:19.746 CC test/nvme/aer/aer.o 00:04:19.746 CC test/nvme/reserve/reserve.o 00:04:19.746 CC test/nvme/e2edp/nvme_dp.o 00:04:19.746 CC test/nvme/boot_partition/boot_partition.o 00:04:19.746 CC test/nvme/cuse/cuse.o 00:04:19.746 CC test/nvme/fused_ordering/fused_ordering.o 00:04:19.746 CC 
test/nvme/compliance/nvme_compliance.o 00:04:19.746 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:19.746 CC test/nvme/sgl/sgl.o 00:04:19.746 CC test/nvme/startup/startup.o 00:04:19.746 CC test/blobfs/mkfs/mkfs.o 00:04:19.746 CC test/accel/dif/dif.o 00:04:19.746 LINK memory_ut 00:04:20.004 CC test/lvol/esnap/esnap.o 00:04:20.004 LINK err_injection 00:04:20.004 LINK boot_partition 00:04:20.004 LINK connect_stress 00:04:20.004 LINK startup 00:04:20.004 LINK fused_ordering 00:04:20.004 LINK simple_copy 00:04:20.004 LINK reserve 00:04:20.004 LINK doorbell_aers 00:04:20.004 LINK reset 00:04:20.004 LINK overhead 00:04:20.004 LINK mkfs 00:04:20.004 LINK sgl 00:04:20.004 LINK nvme_dp 00:04:20.004 LINK aer 00:04:20.004 LINK nvme_compliance 00:04:20.004 LINK fdp 00:04:20.004 CC examples/nvme/reconnect/reconnect.o 00:04:20.004 CC examples/nvme/hello_world/hello_world.o 00:04:20.004 CC examples/nvme/hotplug/hotplug.o 00:04:20.004 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:20.004 CC examples/nvme/arbitration/arbitration.o 00:04:20.004 CC examples/nvme/abort/abort.o 00:04:20.004 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:20.004 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:20.263 CC examples/accel/perf/accel_perf.o 00:04:20.263 CC examples/blob/cli/blobcli.o 00:04:20.263 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:20.263 CC examples/blob/hello_world/hello_blob.o 00:04:20.263 LINK iscsi_fuzz 00:04:20.263 LINK pmr_persistence 00:04:20.263 LINK cmb_copy 00:04:20.263 LINK hotplug 00:04:20.263 LINK hello_world 00:04:20.263 LINK dif 00:04:20.263 LINK reconnect 00:04:20.521 LINK arbitration 00:04:20.521 LINK abort 00:04:20.521 LINK hello_blob 00:04:20.521 LINK nvme_manage 00:04:20.521 LINK hello_fsdev 00:04:20.521 LINK accel_perf 00:04:20.521 LINK blobcli 00:04:20.779 LINK cuse 00:04:20.779 CC test/bdev/bdevio/bdevio.o 00:04:21.038 CC examples/bdev/hello_world/hello_bdev.o 00:04:21.038 CC examples/bdev/bdevperf/bdevperf.o 00:04:21.297 LINK bdevio 00:04:21.297 
LINK hello_bdev 00:04:21.555 LINK bdevperf 00:04:22.123 CC examples/nvmf/nvmf/nvmf.o 00:04:22.381 LINK nvmf 00:04:23.760 LINK esnap 00:04:23.760 00:04:23.760 real 0m55.234s 00:04:23.760 user 8m24.440s 00:04:23.760 sys 3m48.468s 00:04:23.760 05:29:11 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:23.760 05:29:11 make -- common/autotest_common.sh@10 -- $ set +x 00:04:23.760 ************************************ 00:04:23.760 END TEST make 00:04:23.760 ************************************ 00:04:23.760 05:29:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:23.760 05:29:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:23.760 05:29:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:23.760 05:29:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.760 05:29:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:23.760 05:29:11 -- pm/common@44 -- $ pid=919844 00:04:23.760 05:29:11 -- pm/common@50 -- $ kill -TERM 919844 00:04:23.760 05:29:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.760 05:29:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:23.760 05:29:11 -- pm/common@44 -- $ pid=919845 00:04:23.760 05:29:11 -- pm/common@50 -- $ kill -TERM 919845 00:04:23.760 05:29:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.760 05:29:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:23.760 05:29:11 -- pm/common@44 -- $ pid=919847 00:04:23.760 05:29:11 -- pm/common@50 -- $ kill -TERM 919847 00:04:23.760 05:29:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.760 05:29:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:23.760 05:29:11 -- 
pm/common@44 -- $ pid=919874 00:04:23.760 05:29:11 -- pm/common@50 -- $ sudo -E kill -TERM 919874 00:04:23.760 05:29:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:23.760 05:29:11 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:24.080 05:29:11 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.080 05:29:11 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.080 05:29:11 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.080 05:29:11 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.080 05:29:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.080 05:29:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.080 05:29:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.080 05:29:11 -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.080 05:29:11 -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.080 05:29:11 -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.080 05:29:11 -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.080 05:29:11 -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.080 05:29:11 -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.080 05:29:11 -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.080 05:29:11 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.080 05:29:11 -- scripts/common.sh@344 -- # case "$op" in 00:04:24.080 05:29:11 -- scripts/common.sh@345 -- # : 1 00:04:24.080 05:29:11 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.080 05:29:11 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.080 05:29:11 -- scripts/common.sh@365 -- # decimal 1 00:04:24.080 05:29:11 -- scripts/common.sh@353 -- # local d=1 00:04:24.080 05:29:11 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.080 05:29:11 -- scripts/common.sh@355 -- # echo 1 00:04:24.080 05:29:11 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.080 05:29:11 -- scripts/common.sh@366 -- # decimal 2 00:04:24.080 05:29:11 -- scripts/common.sh@353 -- # local d=2 00:04:24.080 05:29:11 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.080 05:29:11 -- scripts/common.sh@355 -- # echo 2 00:04:24.080 05:29:11 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.080 05:29:11 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.080 05:29:11 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.080 05:29:11 -- scripts/common.sh@368 -- # return 0 00:04:24.080 05:29:11 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.080 05:29:11 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.080 --rc genhtml_branch_coverage=1 00:04:24.080 --rc genhtml_function_coverage=1 00:04:24.080 --rc genhtml_legend=1 00:04:24.080 --rc geninfo_all_blocks=1 00:04:24.080 --rc geninfo_unexecuted_blocks=1 00:04:24.080 00:04:24.080 ' 00:04:24.080 05:29:11 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.080 --rc genhtml_branch_coverage=1 00:04:24.080 --rc genhtml_function_coverage=1 00:04:24.080 --rc genhtml_legend=1 00:04:24.080 --rc geninfo_all_blocks=1 00:04:24.080 --rc geninfo_unexecuted_blocks=1 00:04:24.080 00:04:24.080 ' 00:04:24.080 05:29:11 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.080 --rc genhtml_branch_coverage=1 00:04:24.080 --rc 
genhtml_function_coverage=1 00:04:24.080 --rc genhtml_legend=1 00:04:24.080 --rc geninfo_all_blocks=1 00:04:24.080 --rc geninfo_unexecuted_blocks=1 00:04:24.080 00:04:24.080 ' 00:04:24.080 05:29:11 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.080 --rc genhtml_branch_coverage=1 00:04:24.080 --rc genhtml_function_coverage=1 00:04:24.080 --rc genhtml_legend=1 00:04:24.080 --rc geninfo_all_blocks=1 00:04:24.080 --rc geninfo_unexecuted_blocks=1 00:04:24.080 00:04:24.080 ' 00:04:24.080 05:29:11 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:24.080 05:29:11 -- nvmf/common.sh@7 -- # uname -s 00:04:24.080 05:29:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.081 05:29:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.081 05:29:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.081 05:29:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.081 05:29:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.081 05:29:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.081 05:29:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.081 05:29:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.081 05:29:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.081 05:29:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.081 05:29:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:24.081 05:29:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:24.081 05:29:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.081 05:29:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.081 05:29:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:24.081 05:29:11 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.081 05:29:11 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:24.081 05:29:11 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.081 05:29:11 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.081 05:29:11 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.081 05:29:11 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.081 05:29:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.081 05:29:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.081 05:29:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.081 05:29:11 -- paths/export.sh@5 -- # export PATH 00:04:24.081 05:29:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.081 05:29:11 -- nvmf/common.sh@51 -- # : 0 00:04:24.081 05:29:11 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.081 05:29:11 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:24.081 05:29:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.081 05:29:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.081 05:29:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.081 05:29:11 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.081 05:29:11 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.081 05:29:11 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.081 05:29:11 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.081 05:29:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:24.081 05:29:11 -- spdk/autotest.sh@32 -- # uname -s 00:04:24.081 05:29:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:24.081 05:29:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:24.081 05:29:11 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:24.081 05:29:11 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:24.081 05:29:11 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:24.081 05:29:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:24.081 05:29:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:24.081 05:29:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:24.081 05:29:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:24.081 05:29:11 -- spdk/autotest.sh@48 -- # udevadm_pid=981935 00:04:24.081 05:29:11 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:24.081 05:29:11 -- pm/common@17 -- # local monitor 00:04:24.081 05:29:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.081 05:29:11 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:24.081 05:29:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.081 05:29:11 -- pm/common@21 -- # date +%s 00:04:24.081 05:29:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.081 05:29:11 -- pm/common@21 -- # date +%s 00:04:24.081 05:29:11 -- pm/common@25 -- # sleep 1 00:04:24.081 05:29:11 -- pm/common@21 -- # date +%s 00:04:24.081 05:29:11 -- pm/common@21 -- # date +%s 00:04:24.081 05:29:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733804951 00:04:24.081 05:29:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733804951 00:04:24.081 05:29:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733804951 00:04:24.081 05:29:11 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733804951 00:04:24.081 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733804951_collect-cpu-load.pm.log 00:04:24.081 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733804951_collect-vmstat.pm.log 00:04:24.081 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733804951_collect-cpu-temp.pm.log 00:04:24.081 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733804951_collect-bmc-pm.bmc.pm.log 00:04:25.076 
05:29:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:25.076 05:29:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:25.076 05:29:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.076 05:29:12 -- common/autotest_common.sh@10 -- # set +x 00:04:25.076 05:29:12 -- spdk/autotest.sh@59 -- # create_test_list 00:04:25.076 05:29:12 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:25.076 05:29:12 -- common/autotest_common.sh@10 -- # set +x 00:04:25.076 05:29:12 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:25.076 05:29:12 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.076 05:29:12 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.076 05:29:12 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:25.076 05:29:12 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.076 05:29:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:25.076 05:29:12 -- common/autotest_common.sh@1457 -- # uname 00:04:25.076 05:29:12 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:25.076 05:29:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:25.076 05:29:12 -- common/autotest_common.sh@1477 -- # uname 00:04:25.076 05:29:12 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:25.076 05:29:12 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:25.076 05:29:12 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:25.076 lcov: LCOV version 1.15 00:04:25.076 05:29:12 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:37.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:37.279 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:52.160 05:29:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:52.160 05:29:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.160 05:29:37 -- common/autotest_common.sh@10 -- # set +x 00:04:52.160 05:29:37 -- spdk/autotest.sh@78 -- # rm -f 00:04:52.160 05:29:37 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.728 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:52.728 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:52.728 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:52.728 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:52.728 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:52.728 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:52.728 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:52.728 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:52.987 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:52.987 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:52.987 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:52.987 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:52.987 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:52.987 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:52.987 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:52.987 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:52.987 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:53.245 05:29:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:53.245 05:29:40 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:53.245 05:29:40 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:53.245 05:29:40 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:53.245 05:29:40 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:53.245 05:29:40 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:53.245 05:29:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:53.245 05:29:40 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:53.245 05:29:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:53.245 05:29:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:53.245 05:29:40 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:53.245 05:29:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:53.245 05:29:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:53.245 05:29:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:53.245 05:29:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:53.245 05:29:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:53.245 05:29:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:53.245 05:29:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:53.245 05:29:40 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:53.245 No valid GPT data, bailing 00:04:53.245 05:29:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:53.245 05:29:40 -- scripts/common.sh@394 -- # pt= 00:04:53.245 05:29:40 -- scripts/common.sh@395 -- 
# return 1 00:04:53.245 05:29:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:53.245 1+0 records in 00:04:53.245 1+0 records out 00:04:53.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498906 s, 210 MB/s 00:04:53.245 05:29:40 -- spdk/autotest.sh@105 -- # sync 00:04:53.245 05:29:40 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:53.245 05:29:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:53.245 05:29:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:58.519 05:29:46 -- spdk/autotest.sh@111 -- # uname -s 00:04:58.519 05:29:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:58.519 05:29:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:58.519 05:29:46 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:01.810 Hugepages 00:05:01.810 node hugesize free / total 00:05:01.810 node0 1048576kB 0 / 0 00:05:01.810 node0 2048kB 0 / 0 00:05:01.810 node1 1048576kB 0 / 0 00:05:01.810 node1 2048kB 0 / 0 00:05:01.810 00:05:01.810 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.810 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:01.810 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:01.810 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:01.810 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:01.810 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:01.810 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:01.810 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:01.810 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:01.810 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:01.810 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:01.810 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:01.810 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:01.810 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:01.810 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:01.810 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:05:01.810 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:01.810 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:01.810 05:29:49 -- spdk/autotest.sh@117 -- # uname -s 00:05:01.810 05:29:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:01.810 05:29:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:01.810 05:29:49 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.346 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:04.346 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:05.284 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:05.284 05:29:53 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:06.220 05:29:54 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:06.221 05:29:54 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:06.221 05:29:54 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:06.221 05:29:54 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:06.221 05:29:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:06.221 05:29:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:06.221 05:29:54 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:06.221 05:29:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:06.221 05:29:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:06.479 05:29:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:06.479 05:29:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:06.479 05:29:54 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.016 Waiting for block devices as requested 00:05:09.275 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:09.275 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:09.275 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:09.534 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:09.534 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:09.534 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:09.794 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:09.794 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:09.794 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:09.794 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:10.052 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:10.052 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:10.052 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:10.311 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:10.311 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:10.311 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:10.570 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:10.570 05:29:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:10.570 05:29:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:10.570 05:29:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:10.570 05:29:58 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:05:10.570 05:29:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:10.570 05:29:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:10.570 05:29:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:10.570 05:29:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:10.570 05:29:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:10.570 05:29:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:10.570 05:29:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:10.570 05:29:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:10.570 05:29:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:10.570 05:29:58 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:10.570 05:29:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:10.570 05:29:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:10.570 05:29:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:10.570 05:29:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:10.570 05:29:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:10.570 05:29:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:10.570 05:29:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:10.570 05:29:58 -- common/autotest_common.sh@1543 -- # continue 00:05:10.570 05:29:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:10.570 05:29:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.570 05:29:58 -- common/autotest_common.sh@10 -- # set +x 00:05:10.571 05:29:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:10.571 05:29:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.571 
05:29:58 -- common/autotest_common.sh@10 -- # set +x 00:05:10.571 05:29:58 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:13.860 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:13.860 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:14.428 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:14.428 05:30:02 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:14.428 05:30:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:14.428 05:30:02 -- common/autotest_common.sh@10 -- # set +x 00:05:14.428 05:30:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:14.428 05:30:02 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:14.428 05:30:02 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:14.428 05:30:02 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:14.428 05:30:02 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:14.428 05:30:02 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:14.428 05:30:02 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:14.428 05:30:02 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
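The trace above calls `get_nvme_bdfs`, which shells out to `gen_nvme.sh` and filters the JSON with `jq -r '.config[].params.traddr'`. A minimal stand-in that walks a sysfs-style tree instead of invoking the SPDK script (the tree root is a parameter so the sketch stays testable against a fake tree; the name `get_nvme_bdfs_sketch` and the parameterized root are mine, not SPDK's):

```shell
# Hedged sketch: list NVMe controller BDFs by reading <root>/class/nvme/nvme*/address,
# approximating what get_nvme_bdfs() derives from gen_nvme.sh | jq. The layout
# mirrored here (class/nvme/nvmeN/address holding the PCI BDF) matches Linux sysfs,
# but the function takes the root as $1 so it can run against a test tree.
get_nvme_bdfs_sketch() {
    local root=$1 dev
    for dev in "$root"/class/nvme/nvme*; do
        [[ -e $dev/address ]] || continue
        cat "$dev/address"
    done
}
```

Against a live system this would be called as `get_nvme_bdfs_sketch /sys`, printing one BDF per controller, e.g. `0000:5e:00.0` as seen in this run.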
00:05:14.428 05:30:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:14.428 05:30:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:14.428 05:30:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.428 05:30:02 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:14.428 05:30:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:14.687 05:30:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:14.687 05:30:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:14.687 05:30:02 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:14.687 05:30:02 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:14.687 05:30:02 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:14.687 05:30:02 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:14.687 05:30:02 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:14.687 05:30:02 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:14.687 05:30:02 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:05:14.687 05:30:02 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:05:14.687 05:30:02 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=996195 00:05:14.687 05:30:02 -- common/autotest_common.sh@1585 -- # waitforlisten 996195 00:05:14.687 05:30:02 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.687 05:30:02 -- common/autotest_common.sh@835 -- # '[' -z 996195 ']' 00:05:14.687 05:30:02 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.687 05:30:02 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.687 05:30:02 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.687 05:30:02 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.687 05:30:02 -- common/autotest_common.sh@10 -- # set +x 00:05:14.687 [2024-12-10 05:30:02.436822] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:05:14.688 [2024-12-10 05:30:02.436872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996195 ] 00:05:14.688 [2024-12-10 05:30:02.509984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.688 [2024-12-10 05:30:02.551017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.946 05:30:02 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.946 05:30:02 -- common/autotest_common.sh@868 -- # return 0 00:05:14.946 05:30:02 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:14.946 05:30:02 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:14.946 05:30:02 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:18.231 nvme0n1 00:05:18.231 05:30:05 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:18.231 [2024-12-10 05:30:05.939799] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:18.231 [2024-12-10 05:30:05.939827] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:18.231 request: 00:05:18.231 { 00:05:18.231 "nvme_ctrlr_name": "nvme0", 00:05:18.231 "password": "test", 00:05:18.231 "method": 
"bdev_nvme_opal_revert", 00:05:18.231 "req_id": 1 00:05:18.231 } 00:05:18.231 Got JSON-RPC error response 00:05:18.231 response: 00:05:18.231 { 00:05:18.231 "code": -32603, 00:05:18.231 "message": "Internal error" 00:05:18.231 } 00:05:18.231 05:30:05 -- common/autotest_common.sh@1591 -- # true 00:05:18.231 05:30:05 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:18.231 05:30:05 -- common/autotest_common.sh@1595 -- # killprocess 996195 00:05:18.231 05:30:05 -- common/autotest_common.sh@954 -- # '[' -z 996195 ']' 00:05:18.231 05:30:05 -- common/autotest_common.sh@958 -- # kill -0 996195 00:05:18.231 05:30:05 -- common/autotest_common.sh@959 -- # uname 00:05:18.231 05:30:05 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.231 05:30:05 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 996195 00:05:18.231 05:30:05 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.231 05:30:05 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.231 05:30:05 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 996195' 00:05:18.231 killing process with pid 996195 00:05:18.231 05:30:06 -- common/autotest_common.sh@973 -- # kill 996195 00:05:18.231 05:30:06 -- common/autotest_common.sh@978 -- # wait 996195 00:05:20.133 05:30:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:20.133 05:30:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:20.133 05:30:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:20.133 05:30:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:20.133 05:30:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:20.133 05:30:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.133 05:30:07 -- common/autotest_common.sh@10 -- # set +x 00:05:20.133 05:30:07 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:20.133 05:30:07 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:20.133 05:30:07 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.133 05:30:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.133 05:30:07 -- common/autotest_common.sh@10 -- # set +x 00:05:20.133 ************************************ 00:05:20.133 START TEST env 00:05:20.133 ************************************ 00:05:20.133 05:30:07 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:20.134 * Looking for test storage... 00:05:20.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:20.134 05:30:07 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.134 05:30:07 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.134 05:30:07 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.134 05:30:07 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.134 05:30:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.134 05:30:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.134 05:30:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.134 05:30:07 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.134 05:30:07 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.134 05:30:07 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.134 05:30:07 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.134 05:30:07 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.134 05:30:07 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.134 05:30:07 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.134 05:30:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.134 05:30:07 env -- scripts/common.sh@344 -- # case "$op" in 00:05:20.134 05:30:07 env -- scripts/common.sh@345 -- # : 1 00:05:20.134 05:30:07 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.134 05:30:07 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.134 05:30:07 env -- scripts/common.sh@365 -- # decimal 1 00:05:20.134 05:30:07 env -- scripts/common.sh@353 -- # local d=1 00:05:20.134 05:30:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.134 05:30:07 env -- scripts/common.sh@355 -- # echo 1 00:05:20.134 05:30:07 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.134 05:30:07 env -- scripts/common.sh@366 -- # decimal 2 00:05:20.134 05:30:07 env -- scripts/common.sh@353 -- # local d=2 00:05:20.134 05:30:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.134 05:30:07 env -- scripts/common.sh@355 -- # echo 2 00:05:20.134 05:30:07 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.134 05:30:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.134 05:30:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.134 05:30:07 env -- scripts/common.sh@368 -- # return 0 00:05:20.134 05:30:07 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.134 05:30:07 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.134 --rc genhtml_branch_coverage=1 00:05:20.134 --rc genhtml_function_coverage=1 00:05:20.134 --rc genhtml_legend=1 00:05:20.134 --rc geninfo_all_blocks=1 00:05:20.134 --rc geninfo_unexecuted_blocks=1 00:05:20.134 00:05:20.134 ' 00:05:20.134 05:30:07 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.134 --rc genhtml_branch_coverage=1 00:05:20.134 --rc genhtml_function_coverage=1 00:05:20.134 --rc genhtml_legend=1 00:05:20.134 --rc geninfo_all_blocks=1 00:05:20.134 --rc geninfo_unexecuted_blocks=1 00:05:20.134 00:05:20.134 ' 00:05:20.134 05:30:07 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
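The `env` preamble above runs `lt 1.15 2` through `cmp_versions` in `scripts/common.sh`: both version strings are split on `.`, `-`, and `:` into arrays and compared numerically field by field. A self-contained re-sketch of that comparison (the helper name `ver_lt` is mine; missing fields are treated as 0, as the field-wise loop in the trace implies):

```shell
# Sketch of the dotted-version comparison traced above (field-wise numeric
# compare, splitting with IFS=.-: and read -ra as scripts/common.sh does).
# ver_lt A B returns 0 (success) when A < B.
ver_lt() {
    local -a v1 v2
    local i n a b
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    n=${#v1[@]}; (( ${#v2[@]} > n )) && n=${#v2[@]}
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # absent fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}
```

So `ver_lt 1.15 2` succeeds (1 < 2 in the first field), which is why the lcov branch-coverage options get enabled above.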
00:05:20.134 --rc genhtml_branch_coverage=1 00:05:20.134 --rc genhtml_function_coverage=1 00:05:20.134 --rc genhtml_legend=1 00:05:20.134 --rc geninfo_all_blocks=1 00:05:20.134 --rc geninfo_unexecuted_blocks=1 00:05:20.134 00:05:20.134 ' 00:05:20.134 05:30:07 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.134 --rc genhtml_branch_coverage=1 00:05:20.134 --rc genhtml_function_coverage=1 00:05:20.134 --rc genhtml_legend=1 00:05:20.134 --rc geninfo_all_blocks=1 00:05:20.134 --rc geninfo_unexecuted_blocks=1 00:05:20.134 00:05:20.134 ' 00:05:20.134 05:30:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:20.134 05:30:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.134 05:30:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.134 05:30:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.134 ************************************ 00:05:20.134 START TEST env_memory 00:05:20.134 ************************************ 00:05:20.134 05:30:07 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:20.134 00:05:20.134 00:05:20.134 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.134 http://cunit.sourceforge.net/ 00:05:20.134 00:05:20.134 00:05:20.134 Suite: memory 00:05:20.134 Test: alloc and free memory map ...[2024-12-10 05:30:07.918384] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:20.134 passed 00:05:20.134 Test: mem map translation ...[2024-12-10 05:30:07.936230] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:20.134 [2024-12-10 
05:30:07.936255] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:20.134 [2024-12-10 05:30:07.936287] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:20.134 [2024-12-10 05:30:07.936293] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:20.134 passed 00:05:20.134 Test: mem map registration ...[2024-12-10 05:30:07.972569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:20.134 [2024-12-10 05:30:07.972583] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:20.134 passed 00:05:20.134 Test: mem map adjacent registrations ...passed 00:05:20.134 00:05:20.134 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.134 suites 1 1 n/a 0 0 00:05:20.134 tests 4 4 4 0 0 00:05:20.134 asserts 152 152 152 0 n/a 00:05:20.134 00:05:20.134 Elapsed time = 0.127 seconds 00:05:20.134 00:05:20.134 real 0m0.135s 00:05:20.134 user 0m0.127s 00:05:20.134 sys 0m0.007s 00:05:20.134 05:30:08 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.134 05:30:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:20.134 ************************************ 00:05:20.134 END TEST env_memory 00:05:20.134 ************************************ 00:05:20.393 05:30:08 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:20.393 05:30:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:20.393 05:30:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.393 05:30:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.393 ************************************ 00:05:20.393 START TEST env_vtophys 00:05:20.393 ************************************ 00:05:20.393 05:30:08 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:20.393 EAL: lib.eal log level changed from notice to debug 00:05:20.393 EAL: Detected lcore 0 as core 0 on socket 0 00:05:20.393 EAL: Detected lcore 1 as core 1 on socket 0 00:05:20.393 EAL: Detected lcore 2 as core 2 on socket 0 00:05:20.393 EAL: Detected lcore 3 as core 3 on socket 0 00:05:20.393 EAL: Detected lcore 4 as core 4 on socket 0 00:05:20.393 EAL: Detected lcore 5 as core 5 on socket 0 00:05:20.393 EAL: Detected lcore 6 as core 6 on socket 0 00:05:20.393 EAL: Detected lcore 7 as core 8 on socket 0 00:05:20.393 EAL: Detected lcore 8 as core 9 on socket 0 00:05:20.394 EAL: Detected lcore 9 as core 10 on socket 0 00:05:20.394 EAL: Detected lcore 10 as core 11 on socket 0 00:05:20.394 EAL: Detected lcore 11 as core 12 on socket 0 00:05:20.394 EAL: Detected lcore 12 as core 13 on socket 0 00:05:20.394 EAL: Detected lcore 13 as core 16 on socket 0 00:05:20.394 EAL: Detected lcore 14 as core 17 on socket 0 00:05:20.394 EAL: Detected lcore 15 as core 18 on socket 0 00:05:20.394 EAL: Detected lcore 16 as core 19 on socket 0 00:05:20.394 EAL: Detected lcore 17 as core 20 on socket 0 00:05:20.394 EAL: Detected lcore 18 as core 21 on socket 0 00:05:20.394 EAL: Detected lcore 19 as core 25 on socket 0 00:05:20.394 EAL: Detected lcore 20 as core 26 on socket 0 00:05:20.394 EAL: Detected lcore 21 as core 27 on socket 0 00:05:20.394 EAL: Detected lcore 22 as core 28 on socket 0 00:05:20.394 EAL: Detected lcore 23 as core 29 on socket 0 00:05:20.394 EAL: Detected lcore 24 as core 0 on socket 1 00:05:20.394 EAL: Detected lcore 25 
as core 1 on socket 1 00:05:20.394 EAL: Detected lcore 26 as core 2 on socket 1 00:05:20.394 EAL: Detected lcore 27 as core 3 on socket 1 00:05:20.394 EAL: Detected lcore 28 as core 4 on socket 1 00:05:20.394 EAL: Detected lcore 29 as core 5 on socket 1 00:05:20.394 EAL: Detected lcore 30 as core 6 on socket 1 00:05:20.394 EAL: Detected lcore 31 as core 8 on socket 1 00:05:20.394 EAL: Detected lcore 32 as core 9 on socket 1 00:05:20.394 EAL: Detected lcore 33 as core 10 on socket 1 00:05:20.394 EAL: Detected lcore 34 as core 11 on socket 1 00:05:20.394 EAL: Detected lcore 35 as core 12 on socket 1 00:05:20.394 EAL: Detected lcore 36 as core 13 on socket 1 00:05:20.394 EAL: Detected lcore 37 as core 16 on socket 1 00:05:20.394 EAL: Detected lcore 38 as core 17 on socket 1 00:05:20.394 EAL: Detected lcore 39 as core 18 on socket 1 00:05:20.394 EAL: Detected lcore 40 as core 19 on socket 1 00:05:20.394 EAL: Detected lcore 41 as core 20 on socket 1 00:05:20.394 EAL: Detected lcore 42 as core 21 on socket 1 00:05:20.394 EAL: Detected lcore 43 as core 25 on socket 1 00:05:20.394 EAL: Detected lcore 44 as core 26 on socket 1 00:05:20.394 EAL: Detected lcore 45 as core 27 on socket 1 00:05:20.394 EAL: Detected lcore 46 as core 28 on socket 1 00:05:20.394 EAL: Detected lcore 47 as core 29 on socket 1 00:05:20.394 EAL: Detected lcore 48 as core 0 on socket 0 00:05:20.394 EAL: Detected lcore 49 as core 1 on socket 0 00:05:20.394 EAL: Detected lcore 50 as core 2 on socket 0 00:05:20.394 EAL: Detected lcore 51 as core 3 on socket 0 00:05:20.394 EAL: Detected lcore 52 as core 4 on socket 0 00:05:20.394 EAL: Detected lcore 53 as core 5 on socket 0 00:05:20.394 EAL: Detected lcore 54 as core 6 on socket 0 00:05:20.394 EAL: Detected lcore 55 as core 8 on socket 0 00:05:20.394 EAL: Detected lcore 56 as core 9 on socket 0 00:05:20.394 EAL: Detected lcore 57 as core 10 on socket 0 00:05:20.394 EAL: Detected lcore 58 as core 11 on socket 0 00:05:20.394 EAL: Detected lcore 59 as core 12 
on socket 0 00:05:20.394 EAL: Detected lcore 60 as core 13 on socket 0 00:05:20.394 EAL: Detected lcore 61 as core 16 on socket 0 00:05:20.394 EAL: Detected lcore 62 as core 17 on socket 0 00:05:20.394 EAL: Detected lcore 63 as core 18 on socket 0 00:05:20.394 EAL: Detected lcore 64 as core 19 on socket 0 00:05:20.394 EAL: Detected lcore 65 as core 20 on socket 0 00:05:20.394 EAL: Detected lcore 66 as core 21 on socket 0 00:05:20.394 EAL: Detected lcore 67 as core 25 on socket 0 00:05:20.394 EAL: Detected lcore 68 as core 26 on socket 0 00:05:20.394 EAL: Detected lcore 69 as core 27 on socket 0 00:05:20.394 EAL: Detected lcore 70 as core 28 on socket 0 00:05:20.394 EAL: Detected lcore 71 as core 29 on socket 0 00:05:20.394 EAL: Detected lcore 72 as core 0 on socket 1 00:05:20.394 EAL: Detected lcore 73 as core 1 on socket 1 00:05:20.394 EAL: Detected lcore 74 as core 2 on socket 1 00:05:20.394 EAL: Detected lcore 75 as core 3 on socket 1 00:05:20.394 EAL: Detected lcore 76 as core 4 on socket 1 00:05:20.394 EAL: Detected lcore 77 as core 5 on socket 1 00:05:20.394 EAL: Detected lcore 78 as core 6 on socket 1 00:05:20.394 EAL: Detected lcore 79 as core 8 on socket 1 00:05:20.394 EAL: Detected lcore 80 as core 9 on socket 1 00:05:20.394 EAL: Detected lcore 81 as core 10 on socket 1 00:05:20.394 EAL: Detected lcore 82 as core 11 on socket 1 00:05:20.394 EAL: Detected lcore 83 as core 12 on socket 1 00:05:20.394 EAL: Detected lcore 84 as core 13 on socket 1 00:05:20.394 EAL: Detected lcore 85 as core 16 on socket 1 00:05:20.394 EAL: Detected lcore 86 as core 17 on socket 1 00:05:20.394 EAL: Detected lcore 87 as core 18 on socket 1 00:05:20.394 EAL: Detected lcore 88 as core 19 on socket 1 00:05:20.394 EAL: Detected lcore 89 as core 20 on socket 1 00:05:20.394 EAL: Detected lcore 90 as core 21 on socket 1 00:05:20.394 EAL: Detected lcore 91 as core 25 on socket 1 00:05:20.394 EAL: Detected lcore 92 as core 26 on socket 1 00:05:20.394 EAL: Detected lcore 93 as core 27 on 
socket 1 00:05:20.394 EAL: Detected lcore 94 as core 28 on socket 1 00:05:20.394 EAL: Detected lcore 95 as core 29 on socket 1 00:05:20.394 EAL: Maximum logical cores by configuration: 128 00:05:20.394 EAL: Detected CPU lcores: 96 00:05:20.394 EAL: Detected NUMA nodes: 2 00:05:20.394 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:20.394 EAL: Detected shared linkage of DPDK 00:05:20.394 EAL: No shared files mode enabled, IPC will be disabled 00:05:20.394 EAL: Bus pci wants IOVA as 'DC' 00:05:20.394 EAL: Buses did not request a specific IOVA mode. 00:05:20.394 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:20.394 EAL: Selected IOVA mode 'VA' 00:05:20.394 EAL: Probing VFIO support... 00:05:20.394 EAL: IOMMU type 1 (Type 1) is supported 00:05:20.394 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:20.394 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:20.394 EAL: VFIO support initialized 00:05:20.394 EAL: Ask a virtual area of 0x2e000 bytes 00:05:20.394 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:20.394 EAL: Setting up physically contiguous memory... 
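The EAL probe above enumerates 96 lcores across 2 NUMA sockets, one "Detected lcore N as core C on socket S" line at a time. A small sketch that summarizes such lines from stdin (the line format is taken from the log; the counting helper itself is mine):

```shell
# Count lcores per socket from EAL "Detected lcore ..." lines on stdin.
# For each matching line the socket id is the last whitespace field.
summarize_lcores() {
    awk '/Detected lcore [0-9]+ as core/ {
        socket = $NF          # last field is the socket id
        count[socket]++
        total++
    }
    END {
        for (s in count) printf "socket %s: %d lcores\n", s, count[s]
        printf "total: %d\n", total
    }'
}
```

Piping this run's EAL output through it would report 48 lcores on each of socket 0 and socket 1, total 96, matching "Detected CPU lcores: 96" above.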
00:05:20.394 EAL: Setting maximum number of open files to 524288 00:05:20.394 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:20.394 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:20.394 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:20.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.394 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:20.394 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.394 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:20.394 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:20.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.394 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:20.394 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.394 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:20.394 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:20.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.394 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:20.394 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.394 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:20.394 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:20.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.394 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:20.394 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.394 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:20.394 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:20.394 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:20.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.394 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:20.394 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.394 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:20.394 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:20.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.394 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:20.394 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.394 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:20.394 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:20.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.394 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:20.394 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.394 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:20.394 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:20.394 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.394 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:20.394 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.394 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.394 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:20.394 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:20.394 EAL: Hugepages will be freed exactly as allocated. 
00:05:20.394 EAL: No shared files mode enabled, IPC is disabled 00:05:20.394 EAL: No shared files mode enabled, IPC is disabled 00:05:20.394 EAL: TSC frequency is ~2100000 KHz 00:05:20.394 EAL: Main lcore 0 is ready (tid=7f587567fa00;cpuset=[0]) 00:05:20.394 EAL: Trying to obtain current memory policy. 00:05:20.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.394 EAL: Restoring previous memory policy: 0 00:05:20.394 EAL: request: mp_malloc_sync 00:05:20.394 EAL: No shared files mode enabled, IPC is disabled 00:05:20.394 EAL: Heap on socket 0 was expanded by 2MB 00:05:20.394 EAL: No shared files mode enabled, IPC is disabled 00:05:20.394 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:20.394 EAL: Mem event callback 'spdk:(nil)' registered 00:05:20.394 00:05:20.394 00:05:20.394 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.394 http://cunit.sourceforge.net/ 00:05:20.394 00:05:20.394 00:05:20.394 Suite: components_suite 00:05:20.394 Test: vtophys_malloc_test ...passed 00:05:20.394 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:20.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.394 EAL: Restoring previous memory policy: 4 00:05:20.394 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.394 EAL: request: mp_malloc_sync 00:05:20.394 EAL: No shared files mode enabled, IPC is disabled 00:05:20.394 EAL: Heap on socket 0 was expanded by 4MB 00:05:20.394 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.394 EAL: request: mp_malloc_sync 00:05:20.394 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was shrunk by 4MB 00:05:20.395 EAL: Trying to obtain current memory policy. 
00:05:20.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.395 EAL: Restoring previous memory policy: 4 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.395 EAL: request: mp_malloc_sync 00:05:20.395 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was expanded by 6MB 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.395 EAL: request: mp_malloc_sync 00:05:20.395 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was shrunk by 6MB 00:05:20.395 EAL: Trying to obtain current memory policy. 00:05:20.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.395 EAL: Restoring previous memory policy: 4 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.395 EAL: request: mp_malloc_sync 00:05:20.395 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was expanded by 10MB 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.395 EAL: request: mp_malloc_sync 00:05:20.395 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was shrunk by 10MB 00:05:20.395 EAL: Trying to obtain current memory policy. 00:05:20.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.395 EAL: Restoring previous memory policy: 4 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.395 EAL: request: mp_malloc_sync 00:05:20.395 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was expanded by 18MB 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.395 EAL: request: mp_malloc_sync 00:05:20.395 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was shrunk by 18MB 00:05:20.395 EAL: Trying to obtain current memory policy. 
00:05:20.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.395 EAL: Restoring previous memory policy: 4 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.395 EAL: request: mp_malloc_sync 00:05:20.395 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was expanded by 34MB 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.395 EAL: request: mp_malloc_sync 00:05:20.395 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was shrunk by 34MB 00:05:20.395 EAL: Trying to obtain current memory policy. 00:05:20.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.395 EAL: Restoring previous memory policy: 4 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.395 EAL: request: mp_malloc_sync 00:05:20.395 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was expanded by 66MB 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.395 EAL: request: mp_malloc_sync 00:05:20.395 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was shrunk by 66MB 00:05:20.395 EAL: Trying to obtain current memory policy. 00:05:20.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.395 EAL: Restoring previous memory policy: 4 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.395 EAL: request: mp_malloc_sync 00:05:20.395 EAL: No shared files mode enabled, IPC is disabled 00:05:20.395 EAL: Heap on socket 0 was expanded by 130MB 00:05:20.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.654 EAL: request: mp_malloc_sync 00:05:20.654 EAL: No shared files mode enabled, IPC is disabled 00:05:20.654 EAL: Heap on socket 0 was shrunk by 130MB 00:05:20.654 EAL: Trying to obtain current memory policy. 
00:05:20.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.654 EAL: Restoring previous memory policy: 4 00:05:20.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.654 EAL: request: mp_malloc_sync 00:05:20.654 EAL: No shared files mode enabled, IPC is disabled 00:05:20.654 EAL: Heap on socket 0 was expanded by 258MB 00:05:20.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.654 EAL: request: mp_malloc_sync 00:05:20.654 EAL: No shared files mode enabled, IPC is disabled 00:05:20.654 EAL: Heap on socket 0 was shrunk by 258MB 00:05:20.654 EAL: Trying to obtain current memory policy. 00:05:20.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.654 EAL: Restoring previous memory policy: 4 00:05:20.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.654 EAL: request: mp_malloc_sync 00:05:20.654 EAL: No shared files mode enabled, IPC is disabled 00:05:20.654 EAL: Heap on socket 0 was expanded by 514MB 00:05:20.912 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.912 EAL: request: mp_malloc_sync 00:05:20.912 EAL: No shared files mode enabled, IPC is disabled 00:05:20.912 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.912 EAL: Trying to obtain current memory policy. 
00:05:20.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.171 EAL: Restoring previous memory policy: 4 00:05:21.171 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.171 EAL: request: mp_malloc_sync 00:05:21.171 EAL: No shared files mode enabled, IPC is disabled 00:05:21.171 EAL: Heap on socket 0 was expanded by 1026MB 00:05:21.171 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.429 EAL: request: mp_malloc_sync 00:05:21.429 EAL: No shared files mode enabled, IPC is disabled 00:05:21.429 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:21.429 passed 00:05:21.429 00:05:21.429 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.429 suites 1 1 n/a 0 0 00:05:21.429 tests 2 2 2 0 0 00:05:21.429 asserts 497 497 497 0 n/a 00:05:21.429 00:05:21.429 Elapsed time = 0.967 seconds 00:05:21.429 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.429 EAL: request: mp_malloc_sync 00:05:21.429 EAL: No shared files mode enabled, IPC is disabled 00:05:21.429 EAL: Heap on socket 0 was shrunk by 2MB 00:05:21.429 EAL: No shared files mode enabled, IPC is disabled 00:05:21.429 EAL: No shared files mode enabled, IPC is disabled 00:05:21.429 EAL: No shared files mode enabled, IPC is disabled 00:05:21.429 00:05:21.429 real 0m1.093s 00:05:21.429 user 0m0.644s 00:05:21.429 sys 0m0.422s 00:05:21.429 05:30:09 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.429 05:30:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:21.429 ************************************ 00:05:21.429 END TEST env_vtophys 00:05:21.429 ************************************ 00:05:21.429 05:30:09 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.429 05:30:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.429 05:30:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.429 05:30:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.429 
************************************ 00:05:21.429 START TEST env_pci 00:05:21.429 ************************************ 00:05:21.429 05:30:09 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.429 00:05:21.429 00:05:21.429 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.429 http://cunit.sourceforge.net/ 00:05:21.429 00:05:21.429 00:05:21.429 Suite: pci 00:05:21.429 Test: pci_hook ...[2024-12-10 05:30:09.267776] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 997854 has claimed it 00:05:21.429 EAL: Cannot find device (10000:00:01.0) 00:05:21.429 EAL: Failed to attach device on primary process 00:05:21.429 passed 00:05:21.429 00:05:21.429 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.429 suites 1 1 n/a 0 0 00:05:21.429 tests 1 1 1 0 0 00:05:21.429 asserts 25 25 25 0 n/a 00:05:21.429 00:05:21.429 Elapsed time = 0.026 seconds 00:05:21.429 00:05:21.429 real 0m0.045s 00:05:21.429 user 0m0.018s 00:05:21.429 sys 0m0.027s 00:05:21.430 05:30:09 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.430 05:30:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:21.430 ************************************ 00:05:21.430 END TEST env_pci 00:05:21.430 ************************************ 00:05:21.688 05:30:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:21.688 05:30:09 env -- env/env.sh@15 -- # uname 00:05:21.688 05:30:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:21.688 05:30:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:21.688 05:30:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.688 05:30:09 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:21.688 05:30:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.688 05:30:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.688 ************************************ 00:05:21.688 START TEST env_dpdk_post_init 00:05:21.688 ************************************ 00:05:21.688 05:30:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.688 EAL: Detected CPU lcores: 96 00:05:21.688 EAL: Detected NUMA nodes: 2 00:05:21.688 EAL: Detected shared linkage of DPDK 00:05:21.688 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.688 EAL: Selected IOVA mode 'VA' 00:05:21.688 EAL: VFIO support initialized 00:05:21.688 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.688 EAL: Using IOMMU type 1 (Type 1) 00:05:21.688 EAL: Ignore mapping IO port bar(1) 00:05:21.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:21.688 EAL: Ignore mapping IO port bar(1) 00:05:21.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:21.688 EAL: Ignore mapping IO port bar(1) 00:05:21.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:21.688 EAL: Ignore mapping IO port bar(1) 00:05:21.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:21.688 EAL: Ignore mapping IO port bar(1) 00:05:21.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:21.688 EAL: Ignore mapping IO port bar(1) 00:05:21.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:21.947 EAL: Ignore mapping IO port bar(1) 00:05:21.947 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:21.947 EAL: Ignore mapping IO port bar(1) 00:05:21.947 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:22.514 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:22.514 EAL: Ignore mapping IO port bar(1) 00:05:22.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:22.514 EAL: Ignore mapping IO port bar(1) 00:05:22.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:22.514 EAL: Ignore mapping IO port bar(1) 00:05:22.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:22.514 EAL: Ignore mapping IO port bar(1) 00:05:22.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:22.514 EAL: Ignore mapping IO port bar(1) 00:05:22.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:22.514 EAL: Ignore mapping IO port bar(1) 00:05:22.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:22.772 EAL: Ignore mapping IO port bar(1) 00:05:22.772 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:22.772 EAL: Ignore mapping IO port bar(1) 00:05:22.772 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:26.060 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:26.060 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:26.060 Starting DPDK initialization... 00:05:26.060 Starting SPDK post initialization... 00:05:26.060 SPDK NVMe probe 00:05:26.060 Attaching to 0000:5e:00.0 00:05:26.060 Attached to 0000:5e:00.0 00:05:26.060 Cleaning up... 
00:05:26.060 00:05:26.060 real 0m4.313s 00:05:26.060 user 0m2.944s 00:05:26.060 sys 0m0.444s 00:05:26.060 05:30:13 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.060 05:30:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.060 ************************************ 00:05:26.060 END TEST env_dpdk_post_init 00:05:26.060 ************************************ 00:05:26.060 05:30:13 env -- env/env.sh@26 -- # uname 00:05:26.060 05:30:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:26.060 05:30:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:26.060 05:30:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.060 05:30:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.060 05:30:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.060 ************************************ 00:05:26.060 START TEST env_mem_callbacks 00:05:26.060 ************************************ 00:05:26.060 05:30:13 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:26.060 EAL: Detected CPU lcores: 96 00:05:26.060 EAL: Detected NUMA nodes: 2 00:05:26.060 EAL: Detected shared linkage of DPDK 00:05:26.060 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:26.060 EAL: Selected IOVA mode 'VA' 00:05:26.060 EAL: VFIO support initialized 00:05:26.060 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:26.060 00:05:26.060 00:05:26.060 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.060 http://cunit.sourceforge.net/ 00:05:26.060 00:05:26.060 00:05:26.060 Suite: memory 00:05:26.060 Test: test ... 
00:05:26.060 register 0x200000200000 2097152 00:05:26.060 malloc 3145728 00:05:26.060 register 0x200000400000 4194304 00:05:26.060 buf 0x200000500000 len 3145728 PASSED 00:05:26.060 malloc 64 00:05:26.060 buf 0x2000004fff40 len 64 PASSED 00:05:26.060 malloc 4194304 00:05:26.060 register 0x200000800000 6291456 00:05:26.060 buf 0x200000a00000 len 4194304 PASSED 00:05:26.060 free 0x200000500000 3145728 00:05:26.060 free 0x2000004fff40 64 00:05:26.060 unregister 0x200000400000 4194304 PASSED 00:05:26.060 free 0x200000a00000 4194304 00:05:26.060 unregister 0x200000800000 6291456 PASSED 00:05:26.060 malloc 8388608 00:05:26.060 register 0x200000400000 10485760 00:05:26.060 buf 0x200000600000 len 8388608 PASSED 00:05:26.060 free 0x200000600000 8388608 00:05:26.060 unregister 0x200000400000 10485760 PASSED 00:05:26.060 passed 00:05:26.060 00:05:26.060 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.060 suites 1 1 n/a 0 0 00:05:26.060 tests 1 1 1 0 0 00:05:26.060 asserts 15 15 15 0 n/a 00:05:26.060 00:05:26.060 Elapsed time = 0.008 seconds 00:05:26.060 00:05:26.060 real 0m0.057s 00:05:26.060 user 0m0.022s 00:05:26.060 sys 0m0.034s 00:05:26.060 05:30:13 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.060 05:30:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:26.060 ************************************ 00:05:26.060 END TEST env_mem_callbacks 00:05:26.060 ************************************ 00:05:26.060 00:05:26.060 real 0m6.169s 00:05:26.060 user 0m3.992s 00:05:26.060 sys 0m1.263s 00:05:26.060 05:30:13 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.060 05:30:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.060 ************************************ 00:05:26.060 END TEST env 00:05:26.060 ************************************ 00:05:26.060 05:30:13 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:26.060 05:30:13 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.060 05:30:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.060 05:30:13 -- common/autotest_common.sh@10 -- # set +x 00:05:26.060 ************************************ 00:05:26.060 START TEST rpc 00:05:26.060 ************************************ 00:05:26.060 05:30:13 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:26.319 * Looking for test storage... 00:05:26.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.319 05:30:14 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.319 05:30:14 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.319 05:30:14 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.319 05:30:14 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.319 05:30:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.319 05:30:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.319 05:30:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.319 05:30:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.319 05:30:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.319 05:30:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.319 05:30:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.319 05:30:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.319 05:30:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.319 05:30:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.319 05:30:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.319 05:30:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:26.319 05:30:14 rpc -- scripts/common.sh@345 -- # : 1 00:05:26.319 05:30:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.319 05:30:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.319 05:30:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:26.319 05:30:14 rpc -- scripts/common.sh@353 -- # local d=1 00:05:26.319 05:30:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.319 05:30:14 rpc -- scripts/common.sh@355 -- # echo 1 00:05:26.319 05:30:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.319 05:30:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:26.319 05:30:14 rpc -- scripts/common.sh@353 -- # local d=2 00:05:26.319 05:30:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.319 05:30:14 rpc -- scripts/common.sh@355 -- # echo 2 00:05:26.319 05:30:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.319 05:30:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.319 05:30:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.319 05:30:14 rpc -- scripts/common.sh@368 -- # return 0 00:05:26.320 05:30:14 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.320 05:30:14 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.320 --rc genhtml_branch_coverage=1 00:05:26.320 --rc genhtml_function_coverage=1 00:05:26.320 --rc genhtml_legend=1 00:05:26.320 --rc geninfo_all_blocks=1 00:05:26.320 --rc geninfo_unexecuted_blocks=1 00:05:26.320 00:05:26.320 ' 00:05:26.320 05:30:14 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.320 --rc genhtml_branch_coverage=1 00:05:26.320 --rc genhtml_function_coverage=1 00:05:26.320 --rc genhtml_legend=1 00:05:26.320 --rc geninfo_all_blocks=1 00:05:26.320 --rc geninfo_unexecuted_blocks=1 00:05:26.320 00:05:26.320 ' 00:05:26.320 05:30:14 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:26.320 --rc genhtml_branch_coverage=1 00:05:26.320 --rc genhtml_function_coverage=1 00:05:26.320 --rc genhtml_legend=1 00:05:26.320 --rc geninfo_all_blocks=1 00:05:26.320 --rc geninfo_unexecuted_blocks=1 00:05:26.320 00:05:26.320 ' 00:05:26.320 05:30:14 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.320 --rc genhtml_branch_coverage=1 00:05:26.320 --rc genhtml_function_coverage=1 00:05:26.320 --rc genhtml_legend=1 00:05:26.320 --rc geninfo_all_blocks=1 00:05:26.320 --rc geninfo_unexecuted_blocks=1 00:05:26.320 00:05:26.320 ' 00:05:26.320 05:30:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=998666 00:05:26.320 05:30:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.320 05:30:14 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:26.320 05:30:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 998666 00:05:26.320 05:30:14 rpc -- common/autotest_common.sh@835 -- # '[' -z 998666 ']' 00:05:26.320 05:30:14 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.320 05:30:14 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.320 05:30:14 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.320 05:30:14 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.320 05:30:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.320 [2024-12-10 05:30:14.156151] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:05:26.320 [2024-12-10 05:30:14.156199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998666 ] 00:05:26.579 [2024-12-10 05:30:14.230544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.579 [2024-12-10 05:30:14.270423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:26.579 [2024-12-10 05:30:14.270459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 998666' to capture a snapshot of events at runtime. 00:05:26.579 [2024-12-10 05:30:14.270466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:26.579 [2024-12-10 05:30:14.270472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:26.579 [2024-12-10 05:30:14.270478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid998666 for offline analysis/debug. 
00:05:26.579 [2024-12-10 05:30:14.270955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.838 05:30:14 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.838 05:30:14 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:26.838 05:30:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.838 05:30:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.838 05:30:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.838 05:30:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.838 05:30:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.838 05:30:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.838 05:30:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.838 ************************************ 00:05:26.838 START TEST rpc_integrity 00:05:26.838 ************************************ 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.838 05:30:14 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.838 { 00:05:26.838 "name": "Malloc0", 00:05:26.838 "aliases": [ 00:05:26.838 "e791720f-1acb-4888-9d7e-91c85e725bbd" 00:05:26.838 ], 00:05:26.838 "product_name": "Malloc disk", 00:05:26.838 "block_size": 512, 00:05:26.838 "num_blocks": 16384, 00:05:26.838 "uuid": "e791720f-1acb-4888-9d7e-91c85e725bbd", 00:05:26.838 "assigned_rate_limits": { 00:05:26.838 "rw_ios_per_sec": 0, 00:05:26.838 "rw_mbytes_per_sec": 0, 00:05:26.838 "r_mbytes_per_sec": 0, 00:05:26.838 "w_mbytes_per_sec": 0 00:05:26.838 }, 00:05:26.838 "claimed": false, 00:05:26.838 "zoned": false, 00:05:26.838 "supported_io_types": { 00:05:26.838 "read": true, 00:05:26.838 "write": true, 00:05:26.838 "unmap": true, 00:05:26.838 "flush": true, 00:05:26.838 "reset": true, 00:05:26.838 "nvme_admin": false, 00:05:26.838 "nvme_io": false, 00:05:26.838 "nvme_io_md": false, 00:05:26.838 "write_zeroes": true, 00:05:26.838 "zcopy": true, 00:05:26.838 "get_zone_info": false, 00:05:26.838 
"zone_management": false, 00:05:26.838 "zone_append": false, 00:05:26.838 "compare": false, 00:05:26.838 "compare_and_write": false, 00:05:26.838 "abort": true, 00:05:26.838 "seek_hole": false, 00:05:26.838 "seek_data": false, 00:05:26.838 "copy": true, 00:05:26.838 "nvme_iov_md": false 00:05:26.838 }, 00:05:26.838 "memory_domains": [ 00:05:26.838 { 00:05:26.838 "dma_device_id": "system", 00:05:26.838 "dma_device_type": 1 00:05:26.838 }, 00:05:26.838 { 00:05:26.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.838 "dma_device_type": 2 00:05:26.838 } 00:05:26.838 ], 00:05:26.838 "driver_specific": {} 00:05:26.838 } 00:05:26.838 ]' 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.838 [2024-12-10 05:30:14.656138] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.838 [2024-12-10 05:30:14.656172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.838 [2024-12-10 05:30:14.656184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8cd740 00:05:26.838 [2024-12-10 05:30:14.656191] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.838 [2024-12-10 05:30:14.657258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.838 [2024-12-10 05:30:14.657279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.838 Passthru0 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.838 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.838 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.838 { 00:05:26.838 "name": "Malloc0", 00:05:26.838 "aliases": [ 00:05:26.838 "e791720f-1acb-4888-9d7e-91c85e725bbd" 00:05:26.838 ], 00:05:26.838 "product_name": "Malloc disk", 00:05:26.838 "block_size": 512, 00:05:26.838 "num_blocks": 16384, 00:05:26.838 "uuid": "e791720f-1acb-4888-9d7e-91c85e725bbd", 00:05:26.838 "assigned_rate_limits": { 00:05:26.838 "rw_ios_per_sec": 0, 00:05:26.838 "rw_mbytes_per_sec": 0, 00:05:26.838 "r_mbytes_per_sec": 0, 00:05:26.838 "w_mbytes_per_sec": 0 00:05:26.838 }, 00:05:26.838 "claimed": true, 00:05:26.838 "claim_type": "exclusive_write", 00:05:26.838 "zoned": false, 00:05:26.838 "supported_io_types": { 00:05:26.838 "read": true, 00:05:26.838 "write": true, 00:05:26.838 "unmap": true, 00:05:26.838 "flush": true, 00:05:26.838 "reset": true, 00:05:26.838 "nvme_admin": false, 00:05:26.838 "nvme_io": false, 00:05:26.838 "nvme_io_md": false, 00:05:26.838 "write_zeroes": true, 00:05:26.838 "zcopy": true, 00:05:26.838 "get_zone_info": false, 00:05:26.838 "zone_management": false, 00:05:26.838 "zone_append": false, 00:05:26.838 "compare": false, 00:05:26.838 "compare_and_write": false, 00:05:26.838 "abort": true, 00:05:26.838 "seek_hole": false, 00:05:26.838 "seek_data": false, 00:05:26.838 "copy": true, 00:05:26.838 "nvme_iov_md": false 00:05:26.838 }, 00:05:26.838 "memory_domains": [ 00:05:26.838 { 00:05:26.838 "dma_device_id": "system", 00:05:26.838 "dma_device_type": 1 00:05:26.838 }, 00:05:26.838 { 00:05:26.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.838 "dma_device_type": 2 00:05:26.838 } 00:05:26.838 ], 00:05:26.838 "driver_specific": {} 00:05:26.838 }, 00:05:26.838 { 
00:05:26.838 "name": "Passthru0", 00:05:26.838 "aliases": [ 00:05:26.838 "5f9e0f54-adf1-520c-bd2f-fcba885d8bdf" 00:05:26.838 ], 00:05:26.838 "product_name": "passthru", 00:05:26.838 "block_size": 512, 00:05:26.838 "num_blocks": 16384, 00:05:26.838 "uuid": "5f9e0f54-adf1-520c-bd2f-fcba885d8bdf", 00:05:26.838 "assigned_rate_limits": { 00:05:26.838 "rw_ios_per_sec": 0, 00:05:26.838 "rw_mbytes_per_sec": 0, 00:05:26.838 "r_mbytes_per_sec": 0, 00:05:26.838 "w_mbytes_per_sec": 0 00:05:26.838 }, 00:05:26.838 "claimed": false, 00:05:26.838 "zoned": false, 00:05:26.838 "supported_io_types": { 00:05:26.838 "read": true, 00:05:26.838 "write": true, 00:05:26.838 "unmap": true, 00:05:26.838 "flush": true, 00:05:26.838 "reset": true, 00:05:26.838 "nvme_admin": false, 00:05:26.838 "nvme_io": false, 00:05:26.838 "nvme_io_md": false, 00:05:26.838 "write_zeroes": true, 00:05:26.838 "zcopy": true, 00:05:26.838 "get_zone_info": false, 00:05:26.838 "zone_management": false, 00:05:26.838 "zone_append": false, 00:05:26.838 "compare": false, 00:05:26.838 "compare_and_write": false, 00:05:26.838 "abort": true, 00:05:26.838 "seek_hole": false, 00:05:26.838 "seek_data": false, 00:05:26.838 "copy": true, 00:05:26.838 "nvme_iov_md": false 00:05:26.838 }, 00:05:26.838 "memory_domains": [ 00:05:26.838 { 00:05:26.838 "dma_device_id": "system", 00:05:26.838 "dma_device_type": 1 00:05:26.838 }, 00:05:26.838 { 00:05:26.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.838 "dma_device_type": 2 00:05:26.838 } 00:05:26.838 ], 00:05:26.838 "driver_specific": { 00:05:26.838 "passthru": { 00:05:26.838 "name": "Passthru0", 00:05:26.838 "base_bdev_name": "Malloc0" 00:05:26.839 } 00:05:26.839 } 00:05:26.839 } 00:05:26.839 ]' 00:05:26.839 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:27.097 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.097 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.097 05:30:14 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.097 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.097 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.097 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:27.097 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.097 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.097 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.097 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.097 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.097 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.097 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.097 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.097 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.097 05:30:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.097 00:05:27.097 real 0m0.282s 00:05:27.097 user 0m0.168s 00:05:27.097 sys 0m0.049s 00:05:27.097 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.098 05:30:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.098 ************************************ 00:05:27.098 END TEST rpc_integrity 00:05:27.098 ************************************ 00:05:27.098 05:30:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:27.098 05:30:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.098 05:30:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.098 05:30:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.098 ************************************ 00:05:27.098 START TEST rpc_plugins 
00:05:27.098 ************************************ 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:27.098 05:30:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.098 05:30:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:27.098 05:30:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.098 05:30:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:27.098 { 00:05:27.098 "name": "Malloc1", 00:05:27.098 "aliases": [ 00:05:27.098 "2c6e2800-d113-4059-8f19-f06d4d64162a" 00:05:27.098 ], 00:05:27.098 "product_name": "Malloc disk", 00:05:27.098 "block_size": 4096, 00:05:27.098 "num_blocks": 256, 00:05:27.098 "uuid": "2c6e2800-d113-4059-8f19-f06d4d64162a", 00:05:27.098 "assigned_rate_limits": { 00:05:27.098 "rw_ios_per_sec": 0, 00:05:27.098 "rw_mbytes_per_sec": 0, 00:05:27.098 "r_mbytes_per_sec": 0, 00:05:27.098 "w_mbytes_per_sec": 0 00:05:27.098 }, 00:05:27.098 "claimed": false, 00:05:27.098 "zoned": false, 00:05:27.098 "supported_io_types": { 00:05:27.098 "read": true, 00:05:27.098 "write": true, 00:05:27.098 "unmap": true, 00:05:27.098 "flush": true, 00:05:27.098 "reset": true, 00:05:27.098 "nvme_admin": false, 00:05:27.098 "nvme_io": false, 00:05:27.098 "nvme_io_md": false, 00:05:27.098 "write_zeroes": true, 00:05:27.098 "zcopy": true, 00:05:27.098 "get_zone_info": false, 00:05:27.098 "zone_management": false, 00:05:27.098 
"zone_append": false, 00:05:27.098 "compare": false, 00:05:27.098 "compare_and_write": false, 00:05:27.098 "abort": true, 00:05:27.098 "seek_hole": false, 00:05:27.098 "seek_data": false, 00:05:27.098 "copy": true, 00:05:27.098 "nvme_iov_md": false 00:05:27.098 }, 00:05:27.098 "memory_domains": [ 00:05:27.098 { 00:05:27.098 "dma_device_id": "system", 00:05:27.098 "dma_device_type": 1 00:05:27.098 }, 00:05:27.098 { 00:05:27.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.098 "dma_device_type": 2 00:05:27.098 } 00:05:27.098 ], 00:05:27.098 "driver_specific": {} 00:05:27.098 } 00:05:27.098 ]' 00:05:27.098 05:30:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:27.098 05:30:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:27.098 05:30:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.098 05:30:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.098 05:30:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.098 05:30:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:27.098 05:30:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:27.356 05:30:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:27.356 00:05:27.356 real 0m0.140s 00:05:27.356 user 0m0.089s 00:05:27.356 sys 0m0.017s 00:05:27.356 05:30:15 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.356 05:30:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.356 ************************************ 
00:05:27.356 END TEST rpc_plugins 00:05:27.356 ************************************ 00:05:27.356 05:30:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:27.356 05:30:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.356 05:30:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.356 05:30:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.356 ************************************ 00:05:27.356 START TEST rpc_trace_cmd_test 00:05:27.356 ************************************ 00:05:27.356 05:30:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:27.356 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:27.357 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid998666", 00:05:27.357 "tpoint_group_mask": "0x8", 00:05:27.357 "iscsi_conn": { 00:05:27.357 "mask": "0x2", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "scsi": { 00:05:27.357 "mask": "0x4", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "bdev": { 00:05:27.357 "mask": "0x8", 00:05:27.357 "tpoint_mask": "0xffffffffffffffff" 00:05:27.357 }, 00:05:27.357 "nvmf_rdma": { 00:05:27.357 "mask": "0x10", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "nvmf_tcp": { 00:05:27.357 "mask": "0x20", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "ftl": { 00:05:27.357 "mask": "0x40", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "blobfs": { 00:05:27.357 "mask": "0x80", 00:05:27.357 
"tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "dsa": { 00:05:27.357 "mask": "0x200", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "thread": { 00:05:27.357 "mask": "0x400", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "nvme_pcie": { 00:05:27.357 "mask": "0x800", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "iaa": { 00:05:27.357 "mask": "0x1000", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "nvme_tcp": { 00:05:27.357 "mask": "0x2000", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "bdev_nvme": { 00:05:27.357 "mask": "0x4000", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "sock": { 00:05:27.357 "mask": "0x8000", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "blob": { 00:05:27.357 "mask": "0x10000", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "bdev_raid": { 00:05:27.357 "mask": "0x20000", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 }, 00:05:27.357 "scheduler": { 00:05:27.357 "mask": "0x40000", 00:05:27.357 "tpoint_mask": "0x0" 00:05:27.357 } 00:05:27.357 }' 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:27.357 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:27.615 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:27.615 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:27.615 05:30:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:27.615 00:05:27.615 real 0m0.225s 00:05:27.615 user 0m0.194s 00:05:27.615 sys 0m0.024s 00:05:27.615 05:30:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.615 05:30:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:27.615 ************************************ 00:05:27.615 END TEST rpc_trace_cmd_test 00:05:27.615 ************************************ 00:05:27.615 05:30:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:27.615 05:30:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:27.615 05:30:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:27.615 05:30:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.615 05:30:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.615 05:30:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.615 ************************************ 00:05:27.615 START TEST rpc_daemon_integrity 00:05:27.615 ************************************ 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.615 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.616 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.616 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.616 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.616 { 00:05:27.616 "name": "Malloc2", 00:05:27.616 "aliases": [ 00:05:27.616 "05f128c3-5221-49ed-bae5-06355f31e895" 00:05:27.616 ], 00:05:27.616 "product_name": "Malloc disk", 00:05:27.616 "block_size": 512, 00:05:27.616 "num_blocks": 16384, 00:05:27.616 "uuid": "05f128c3-5221-49ed-bae5-06355f31e895", 00:05:27.616 "assigned_rate_limits": { 00:05:27.616 "rw_ios_per_sec": 0, 00:05:27.616 "rw_mbytes_per_sec": 0, 00:05:27.616 "r_mbytes_per_sec": 0, 00:05:27.616 "w_mbytes_per_sec": 0 00:05:27.616 }, 00:05:27.616 "claimed": false, 00:05:27.616 "zoned": false, 00:05:27.616 "supported_io_types": { 00:05:27.616 "read": true, 00:05:27.616 "write": true, 00:05:27.616 "unmap": true, 00:05:27.616 "flush": true, 00:05:27.616 "reset": true, 00:05:27.616 "nvme_admin": false, 00:05:27.616 "nvme_io": false, 00:05:27.616 "nvme_io_md": false, 00:05:27.616 "write_zeroes": true, 00:05:27.616 "zcopy": true, 00:05:27.616 "get_zone_info": false, 00:05:27.616 "zone_management": false, 00:05:27.616 "zone_append": false, 00:05:27.616 "compare": false, 00:05:27.616 "compare_and_write": false, 00:05:27.616 "abort": true, 00:05:27.616 "seek_hole": false, 00:05:27.616 "seek_data": false, 00:05:27.616 "copy": true, 00:05:27.616 "nvme_iov_md": false 00:05:27.616 }, 00:05:27.616 "memory_domains": [ 00:05:27.616 { 
00:05:27.616 "dma_device_id": "system", 00:05:27.616 "dma_device_type": 1 00:05:27.616 }, 00:05:27.616 { 00:05:27.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.616 "dma_device_type": 2 00:05:27.616 } 00:05:27.616 ], 00:05:27.616 "driver_specific": {} 00:05:27.616 } 00:05:27.616 ]' 00:05:27.616 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:27.616 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.616 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.616 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.616 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.875 [2024-12-10 05:30:15.510446] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.875 [2024-12-10 05:30:15.510474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.875 [2024-12-10 05:30:15.510485] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x89afe0 00:05:27.875 [2024-12-10 05:30:15.510491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.875 [2024-12-10 05:30:15.511449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.875 [2024-12-10 05:30:15.511469] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.875 Passthru0 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.875 { 00:05:27.875 "name": "Malloc2", 00:05:27.875 "aliases": [ 00:05:27.875 "05f128c3-5221-49ed-bae5-06355f31e895" 00:05:27.875 ], 00:05:27.875 "product_name": "Malloc disk", 00:05:27.875 "block_size": 512, 00:05:27.875 "num_blocks": 16384, 00:05:27.875 "uuid": "05f128c3-5221-49ed-bae5-06355f31e895", 00:05:27.875 "assigned_rate_limits": { 00:05:27.875 "rw_ios_per_sec": 0, 00:05:27.875 "rw_mbytes_per_sec": 0, 00:05:27.875 "r_mbytes_per_sec": 0, 00:05:27.875 "w_mbytes_per_sec": 0 00:05:27.875 }, 00:05:27.875 "claimed": true, 00:05:27.875 "claim_type": "exclusive_write", 00:05:27.875 "zoned": false, 00:05:27.875 "supported_io_types": { 00:05:27.875 "read": true, 00:05:27.875 "write": true, 00:05:27.875 "unmap": true, 00:05:27.875 "flush": true, 00:05:27.875 "reset": true, 00:05:27.875 "nvme_admin": false, 00:05:27.875 "nvme_io": false, 00:05:27.875 "nvme_io_md": false, 00:05:27.875 "write_zeroes": true, 00:05:27.875 "zcopy": true, 00:05:27.875 "get_zone_info": false, 00:05:27.875 "zone_management": false, 00:05:27.875 "zone_append": false, 00:05:27.875 "compare": false, 00:05:27.875 "compare_and_write": false, 00:05:27.875 "abort": true, 00:05:27.875 "seek_hole": false, 00:05:27.875 "seek_data": false, 00:05:27.875 "copy": true, 00:05:27.875 "nvme_iov_md": false 00:05:27.875 }, 00:05:27.875 "memory_domains": [ 00:05:27.875 { 00:05:27.875 "dma_device_id": "system", 00:05:27.875 "dma_device_type": 1 00:05:27.875 }, 00:05:27.875 { 00:05:27.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.875 "dma_device_type": 2 00:05:27.875 } 00:05:27.875 ], 00:05:27.875 "driver_specific": {} 00:05:27.875 }, 00:05:27.875 { 00:05:27.875 "name": "Passthru0", 00:05:27.875 "aliases": [ 00:05:27.875 "475594ca-09fe-50d3-ad15-d466b8a6b13d" 00:05:27.875 ], 00:05:27.875 "product_name": "passthru", 00:05:27.875 "block_size": 512, 00:05:27.875 "num_blocks": 16384, 00:05:27.875 "uuid": 
"475594ca-09fe-50d3-ad15-d466b8a6b13d", 00:05:27.875 "assigned_rate_limits": { 00:05:27.875 "rw_ios_per_sec": 0, 00:05:27.875 "rw_mbytes_per_sec": 0, 00:05:27.875 "r_mbytes_per_sec": 0, 00:05:27.875 "w_mbytes_per_sec": 0 00:05:27.875 }, 00:05:27.875 "claimed": false, 00:05:27.875 "zoned": false, 00:05:27.875 "supported_io_types": { 00:05:27.875 "read": true, 00:05:27.875 "write": true, 00:05:27.875 "unmap": true, 00:05:27.875 "flush": true, 00:05:27.875 "reset": true, 00:05:27.875 "nvme_admin": false, 00:05:27.875 "nvme_io": false, 00:05:27.875 "nvme_io_md": false, 00:05:27.875 "write_zeroes": true, 00:05:27.875 "zcopy": true, 00:05:27.875 "get_zone_info": false, 00:05:27.875 "zone_management": false, 00:05:27.875 "zone_append": false, 00:05:27.875 "compare": false, 00:05:27.875 "compare_and_write": false, 00:05:27.875 "abort": true, 00:05:27.875 "seek_hole": false, 00:05:27.875 "seek_data": false, 00:05:27.875 "copy": true, 00:05:27.875 "nvme_iov_md": false 00:05:27.875 }, 00:05:27.875 "memory_domains": [ 00:05:27.875 { 00:05:27.875 "dma_device_id": "system", 00:05:27.875 "dma_device_type": 1 00:05:27.875 }, 00:05:27.875 { 00:05:27.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.875 "dma_device_type": 2 00:05:27.875 } 00:05:27.875 ], 00:05:27.875 "driver_specific": { 00:05:27.875 "passthru": { 00:05:27.875 "name": "Passthru0", 00:05:27.875 "base_bdev_name": "Malloc2" 00:05:27.875 } 00:05:27.875 } 00:05:27.875 } 00:05:27.875 ]' 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.875 00:05:27.875 real 0m0.273s 00:05:27.875 user 0m0.174s 00:05:27.875 sys 0m0.037s 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.875 05:30:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.875 ************************************ 00:05:27.875 END TEST rpc_daemon_integrity 00:05:27.875 ************************************ 00:05:27.875 05:30:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.875 05:30:15 rpc -- rpc/rpc.sh@84 -- # killprocess 998666 00:05:27.875 05:30:15 rpc -- common/autotest_common.sh@954 -- # '[' -z 998666 ']' 00:05:27.875 05:30:15 rpc -- common/autotest_common.sh@958 -- # kill -0 998666 00:05:27.875 05:30:15 rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.875 05:30:15 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.875 05:30:15 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 998666 00:05:27.875 05:30:15 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.875 05:30:15 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.875 05:30:15 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 998666' 00:05:27.875 killing process with pid 998666 00:05:27.875 05:30:15 rpc -- common/autotest_common.sh@973 -- # kill 998666 00:05:27.875 05:30:15 rpc -- common/autotest_common.sh@978 -- # wait 998666 00:05:28.442 00:05:28.442 real 0m2.108s 00:05:28.442 user 0m2.688s 00:05:28.442 sys 0m0.701s 00:05:28.442 05:30:16 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.442 05:30:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.442 ************************************ 00:05:28.442 END TEST rpc 00:05:28.442 ************************************ 00:05:28.442 05:30:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:28.442 05:30:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.442 05:30:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.442 05:30:16 -- common/autotest_common.sh@10 -- # set +x 00:05:28.442 ************************************ 00:05:28.442 START TEST skip_rpc 00:05:28.442 ************************************ 00:05:28.442 05:30:16 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:28.442 * Looking for test storage... 
00:05:28.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.443 05:30:16 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.443 --rc genhtml_branch_coverage=1 00:05:28.443 --rc genhtml_function_coverage=1 00:05:28.443 --rc genhtml_legend=1 00:05:28.443 --rc geninfo_all_blocks=1 00:05:28.443 --rc geninfo_unexecuted_blocks=1 00:05:28.443 00:05:28.443 ' 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.443 --rc genhtml_branch_coverage=1 00:05:28.443 --rc genhtml_function_coverage=1 00:05:28.443 --rc genhtml_legend=1 00:05:28.443 --rc geninfo_all_blocks=1 00:05:28.443 --rc geninfo_unexecuted_blocks=1 00:05:28.443 00:05:28.443 ' 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:28.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.443 --rc genhtml_branch_coverage=1 00:05:28.443 --rc genhtml_function_coverage=1 00:05:28.443 --rc genhtml_legend=1 00:05:28.443 --rc geninfo_all_blocks=1 00:05:28.443 --rc geninfo_unexecuted_blocks=1 00:05:28.443 00:05:28.443 ' 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.443 --rc genhtml_branch_coverage=1 00:05:28.443 --rc genhtml_function_coverage=1 00:05:28.443 --rc genhtml_legend=1 00:05:28.443 --rc geninfo_all_blocks=1 00:05:28.443 --rc geninfo_unexecuted_blocks=1 00:05:28.443 00:05:28.443 ' 00:05:28.443 05:30:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:28.443 05:30:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:28.443 05:30:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.443 05:30:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.443 ************************************ 00:05:28.443 START TEST skip_rpc 00:05:28.443 ************************************ 00:05:28.443 05:30:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:28.443 05:30:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=999287 00:05:28.443 05:30:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.443 05:30:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:28.443 05:30:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:28.702 [2024-12-10 05:30:16.362408] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:05:28.702 [2024-12-10 05:30:16.362455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999287 ] 00:05:28.702 [2024-12-10 05:30:16.435604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.702 [2024-12-10 05:30:16.473767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.098 05:30:21 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 999287 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 999287 ']' 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 999287 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 999287 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 999287' 00:05:34.098 killing process with pid 999287 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 999287 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 999287 00:05:34.098 00:05:34.098 real 0m5.356s 00:05:34.098 user 0m5.118s 00:05:34.098 sys 0m0.272s 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.098 05:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.098 ************************************ 00:05:34.098 END TEST skip_rpc 00:05:34.098 ************************************ 00:05:34.098 05:30:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:34.098 05:30:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.098 05:30:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.098 05:30:21 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:34.098 ************************************ 00:05:34.098 START TEST skip_rpc_with_json 00:05:34.098 ************************************ 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1000209 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1000209 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1000209 ']' 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.098 05:30:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.098 [2024-12-10 05:30:21.790010] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:05:34.098 [2024-12-10 05:30:21.790051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000209 ] 00:05:34.099 [2024-12-10 05:30:21.865278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.099 [2024-12-10 05:30:21.907334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.367 [2024-12-10 05:30:22.124669] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:34.367 request: 00:05:34.367 { 00:05:34.367 "trtype": "tcp", 00:05:34.367 "method": "nvmf_get_transports", 00:05:34.367 "req_id": 1 00:05:34.367 } 00:05:34.367 Got JSON-RPC error response 00:05:34.367 response: 00:05:34.367 { 00:05:34.367 "code": -19, 00:05:34.367 "message": "No such device" 00:05:34.367 } 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.367 [2024-12-10 05:30:22.136787] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.367 05:30:22 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.367 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.626 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.626 05:30:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.626 { 00:05:34.626 "subsystems": [ 00:05:34.626 { 00:05:34.626 "subsystem": "fsdev", 00:05:34.626 "config": [ 00:05:34.626 { 00:05:34.626 "method": "fsdev_set_opts", 00:05:34.626 "params": { 00:05:34.626 "fsdev_io_pool_size": 65535, 00:05:34.626 "fsdev_io_cache_size": 256 00:05:34.626 } 00:05:34.626 } 00:05:34.626 ] 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "subsystem": "vfio_user_target", 00:05:34.626 "config": null 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "subsystem": "keyring", 00:05:34.626 "config": [] 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "subsystem": "iobuf", 00:05:34.626 "config": [ 00:05:34.626 { 00:05:34.626 "method": "iobuf_set_options", 00:05:34.626 "params": { 00:05:34.626 "small_pool_count": 8192, 00:05:34.626 "large_pool_count": 1024, 00:05:34.626 "small_bufsize": 8192, 00:05:34.626 "large_bufsize": 135168, 00:05:34.626 "enable_numa": false 00:05:34.626 } 00:05:34.626 } 00:05:34.626 ] 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "subsystem": "sock", 00:05:34.626 "config": [ 00:05:34.626 { 00:05:34.626 "method": "sock_set_default_impl", 00:05:34.626 "params": { 00:05:34.626 "impl_name": "posix" 00:05:34.626 } 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "method": "sock_impl_set_options", 00:05:34.626 "params": { 00:05:34.626 "impl_name": "ssl", 00:05:34.626 "recv_buf_size": 4096, 00:05:34.626 "send_buf_size": 4096, 
00:05:34.626 "enable_recv_pipe": true, 00:05:34.626 "enable_quickack": false, 00:05:34.626 "enable_placement_id": 0, 00:05:34.626 "enable_zerocopy_send_server": true, 00:05:34.626 "enable_zerocopy_send_client": false, 00:05:34.626 "zerocopy_threshold": 0, 00:05:34.626 "tls_version": 0, 00:05:34.626 "enable_ktls": false 00:05:34.626 } 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "method": "sock_impl_set_options", 00:05:34.626 "params": { 00:05:34.626 "impl_name": "posix", 00:05:34.626 "recv_buf_size": 2097152, 00:05:34.626 "send_buf_size": 2097152, 00:05:34.626 "enable_recv_pipe": true, 00:05:34.626 "enable_quickack": false, 00:05:34.626 "enable_placement_id": 0, 00:05:34.626 "enable_zerocopy_send_server": true, 00:05:34.626 "enable_zerocopy_send_client": false, 00:05:34.626 "zerocopy_threshold": 0, 00:05:34.626 "tls_version": 0, 00:05:34.626 "enable_ktls": false 00:05:34.626 } 00:05:34.626 } 00:05:34.626 ] 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "subsystem": "vmd", 00:05:34.626 "config": [] 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "subsystem": "accel", 00:05:34.626 "config": [ 00:05:34.626 { 00:05:34.626 "method": "accel_set_options", 00:05:34.626 "params": { 00:05:34.626 "small_cache_size": 128, 00:05:34.626 "large_cache_size": 16, 00:05:34.626 "task_count": 2048, 00:05:34.626 "sequence_count": 2048, 00:05:34.626 "buf_count": 2048 00:05:34.626 } 00:05:34.626 } 00:05:34.626 ] 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "subsystem": "bdev", 00:05:34.626 "config": [ 00:05:34.626 { 00:05:34.626 "method": "bdev_set_options", 00:05:34.626 "params": { 00:05:34.626 "bdev_io_pool_size": 65535, 00:05:34.626 "bdev_io_cache_size": 256, 00:05:34.626 "bdev_auto_examine": true, 00:05:34.626 "iobuf_small_cache_size": 128, 00:05:34.626 "iobuf_large_cache_size": 16 00:05:34.626 } 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "method": "bdev_raid_set_options", 00:05:34.626 "params": { 00:05:34.626 "process_window_size_kb": 1024, 00:05:34.626 "process_max_bandwidth_mb_sec": 0 
00:05:34.626 } 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "method": "bdev_iscsi_set_options", 00:05:34.626 "params": { 00:05:34.626 "timeout_sec": 30 00:05:34.626 } 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "method": "bdev_nvme_set_options", 00:05:34.626 "params": { 00:05:34.626 "action_on_timeout": "none", 00:05:34.626 "timeout_us": 0, 00:05:34.626 "timeout_admin_us": 0, 00:05:34.626 "keep_alive_timeout_ms": 10000, 00:05:34.626 "arbitration_burst": 0, 00:05:34.626 "low_priority_weight": 0, 00:05:34.626 "medium_priority_weight": 0, 00:05:34.626 "high_priority_weight": 0, 00:05:34.626 "nvme_adminq_poll_period_us": 10000, 00:05:34.626 "nvme_ioq_poll_period_us": 0, 00:05:34.626 "io_queue_requests": 0, 00:05:34.626 "delay_cmd_submit": true, 00:05:34.626 "transport_retry_count": 4, 00:05:34.626 "bdev_retry_count": 3, 00:05:34.626 "transport_ack_timeout": 0, 00:05:34.626 "ctrlr_loss_timeout_sec": 0, 00:05:34.626 "reconnect_delay_sec": 0, 00:05:34.626 "fast_io_fail_timeout_sec": 0, 00:05:34.626 "disable_auto_failback": false, 00:05:34.626 "generate_uuids": false, 00:05:34.626 "transport_tos": 0, 00:05:34.626 "nvme_error_stat": false, 00:05:34.626 "rdma_srq_size": 0, 00:05:34.626 "io_path_stat": false, 00:05:34.626 "allow_accel_sequence": false, 00:05:34.626 "rdma_max_cq_size": 0, 00:05:34.626 "rdma_cm_event_timeout_ms": 0, 00:05:34.626 "dhchap_digests": [ 00:05:34.626 "sha256", 00:05:34.626 "sha384", 00:05:34.626 "sha512" 00:05:34.626 ], 00:05:34.626 "dhchap_dhgroups": [ 00:05:34.626 "null", 00:05:34.626 "ffdhe2048", 00:05:34.626 "ffdhe3072", 00:05:34.626 "ffdhe4096", 00:05:34.626 "ffdhe6144", 00:05:34.626 "ffdhe8192" 00:05:34.626 ] 00:05:34.626 } 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "method": "bdev_nvme_set_hotplug", 00:05:34.626 "params": { 00:05:34.626 "period_us": 100000, 00:05:34.626 "enable": false 00:05:34.626 } 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "method": "bdev_wait_for_examine" 00:05:34.626 } 00:05:34.626 ] 00:05:34.626 }, 00:05:34.626 { 
00:05:34.626 "subsystem": "scsi", 00:05:34.626 "config": null 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "subsystem": "scheduler", 00:05:34.626 "config": [ 00:05:34.626 { 00:05:34.626 "method": "framework_set_scheduler", 00:05:34.626 "params": { 00:05:34.626 "name": "static" 00:05:34.626 } 00:05:34.626 } 00:05:34.626 ] 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "subsystem": "vhost_scsi", 00:05:34.626 "config": [] 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "subsystem": "vhost_blk", 00:05:34.626 "config": [] 00:05:34.626 }, 00:05:34.626 { 00:05:34.626 "subsystem": "ublk", 00:05:34.627 "config": [] 00:05:34.627 }, 00:05:34.627 { 00:05:34.627 "subsystem": "nbd", 00:05:34.627 "config": [] 00:05:34.627 }, 00:05:34.627 { 00:05:34.627 "subsystem": "nvmf", 00:05:34.627 "config": [ 00:05:34.627 { 00:05:34.627 "method": "nvmf_set_config", 00:05:34.627 "params": { 00:05:34.627 "discovery_filter": "match_any", 00:05:34.627 "admin_cmd_passthru": { 00:05:34.627 "identify_ctrlr": false 00:05:34.627 }, 00:05:34.627 "dhchap_digests": [ 00:05:34.627 "sha256", 00:05:34.627 "sha384", 00:05:34.627 "sha512" 00:05:34.627 ], 00:05:34.627 "dhchap_dhgroups": [ 00:05:34.627 "null", 00:05:34.627 "ffdhe2048", 00:05:34.627 "ffdhe3072", 00:05:34.627 "ffdhe4096", 00:05:34.627 "ffdhe6144", 00:05:34.627 "ffdhe8192" 00:05:34.627 ] 00:05:34.627 } 00:05:34.627 }, 00:05:34.627 { 00:05:34.627 "method": "nvmf_set_max_subsystems", 00:05:34.627 "params": { 00:05:34.627 "max_subsystems": 1024 00:05:34.627 } 00:05:34.627 }, 00:05:34.627 { 00:05:34.627 "method": "nvmf_set_crdt", 00:05:34.627 "params": { 00:05:34.627 "crdt1": 0, 00:05:34.627 "crdt2": 0, 00:05:34.627 "crdt3": 0 00:05:34.627 } 00:05:34.627 }, 00:05:34.627 { 00:05:34.627 "method": "nvmf_create_transport", 00:05:34.627 "params": { 00:05:34.627 "trtype": "TCP", 00:05:34.627 "max_queue_depth": 128, 00:05:34.627 "max_io_qpairs_per_ctrlr": 127, 00:05:34.627 "in_capsule_data_size": 4096, 00:05:34.627 "max_io_size": 131072, 00:05:34.627 
"io_unit_size": 131072, 00:05:34.627 "max_aq_depth": 128, 00:05:34.627 "num_shared_buffers": 511, 00:05:34.627 "buf_cache_size": 4294967295, 00:05:34.627 "dif_insert_or_strip": false, 00:05:34.627 "zcopy": false, 00:05:34.627 "c2h_success": true, 00:05:34.627 "sock_priority": 0, 00:05:34.627 "abort_timeout_sec": 1, 00:05:34.627 "ack_timeout": 0, 00:05:34.627 "data_wr_pool_size": 0 00:05:34.627 } 00:05:34.627 } 00:05:34.627 ] 00:05:34.627 }, 00:05:34.627 { 00:05:34.627 "subsystem": "iscsi", 00:05:34.627 "config": [ 00:05:34.627 { 00:05:34.627 "method": "iscsi_set_options", 00:05:34.627 "params": { 00:05:34.627 "node_base": "iqn.2016-06.io.spdk", 00:05:34.627 "max_sessions": 128, 00:05:34.627 "max_connections_per_session": 2, 00:05:34.627 "max_queue_depth": 64, 00:05:34.627 "default_time2wait": 2, 00:05:34.627 "default_time2retain": 20, 00:05:34.627 "first_burst_length": 8192, 00:05:34.627 "immediate_data": true, 00:05:34.627 "allow_duplicated_isid": false, 00:05:34.627 "error_recovery_level": 0, 00:05:34.627 "nop_timeout": 60, 00:05:34.627 "nop_in_interval": 30, 00:05:34.627 "disable_chap": false, 00:05:34.627 "require_chap": false, 00:05:34.627 "mutual_chap": false, 00:05:34.627 "chap_group": 0, 00:05:34.627 "max_large_datain_per_connection": 64, 00:05:34.627 "max_r2t_per_connection": 4, 00:05:34.627 "pdu_pool_size": 36864, 00:05:34.627 "immediate_data_pool_size": 16384, 00:05:34.627 "data_out_pool_size": 2048 00:05:34.627 } 00:05:34.627 } 00:05:34.627 ] 00:05:34.627 } 00:05:34.627 ] 00:05:34.627 } 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1000209 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1000209 ']' 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1000209 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1000209 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1000209' 00:05:34.627 killing process with pid 1000209 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1000209 00:05:34.627 05:30:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1000209 00:05:34.886 05:30:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1000418 00:05:34.886 05:30:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:34.886 05:30:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1000418 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1000418 ']' 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1000418 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1000418 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1000418' 00:05:40.154 killing process with pid 1000418 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1000418 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1000418 00:05:40.154 05:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:40.154 05:30:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:40.154 00:05:40.154 real 0m6.269s 00:05:40.154 user 0m5.970s 00:05:40.154 sys 0m0.587s 00:05:40.154 05:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.154 05:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.154 ************************************ 00:05:40.154 END TEST skip_rpc_with_json 00:05:40.154 ************************************ 00:05:40.154 05:30:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:40.154 05:30:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.154 05:30:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.154 05:30:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.412 ************************************ 00:05:40.412 START TEST skip_rpc_with_delay 00:05:40.412 ************************************ 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.412 [2024-12-10 05:30:28.132093] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.412 00:05:40.412 real 0m0.067s 00:05:40.412 user 0m0.042s 00:05:40.412 sys 0m0.025s 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.412 05:30:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:40.412 ************************************ 00:05:40.412 END TEST skip_rpc_with_delay 00:05:40.412 ************************************ 00:05:40.412 05:30:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:40.412 05:30:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:40.412 05:30:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:40.412 05:30:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.412 05:30:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.412 05:30:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.412 ************************************ 00:05:40.412 START TEST exit_on_failed_rpc_init 00:05:40.412 ************************************ 00:05:40.412 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:40.412 05:30:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1001395 00:05:40.412 05:30:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1001395 00:05:40.412 05:30:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:40.412 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1001395 ']' 00:05:40.412 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.412 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.412 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.412 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.412 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.412 [2024-12-10 05:30:28.269643] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:05:40.412 [2024-12-10 05:30:28.269685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001395 ] 00:05:40.671 [2024-12-10 05:30:28.341242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.671 [2024-12-10 05:30:28.381910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.930 
05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:40.930 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.930 [2024-12-10 05:30:28.644974] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:05:40.930 [2024-12-10 05:30:28.645019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001404 ] 00:05:40.930 [2024-12-10 05:30:28.718895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.931 [2024-12-10 05:30:28.757915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.931 [2024-12-10 05:30:28.757968] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:40.931 [2024-12-10 05:30:28.757977] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:40.931 [2024-12-10 05:30:28.757983] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1001395 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1001395 ']' 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1001395 00:05:40.931 05:30:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.931 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1001395 00:05:41.190 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.190 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.190 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1001395' 00:05:41.190 killing process with pid 1001395 00:05:41.190 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1001395 00:05:41.190 05:30:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1001395 00:05:41.449 00:05:41.449 real 0m0.935s 00:05:41.449 user 0m1.001s 00:05:41.449 sys 0m0.379s 00:05:41.449 05:30:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.449 05:30:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.449 ************************************ 00:05:41.449 END TEST exit_on_failed_rpc_init 00:05:41.449 ************************************ 00:05:41.449 05:30:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:41.449 00:05:41.449 real 0m13.081s 00:05:41.449 user 0m12.328s 00:05:41.449 sys 0m1.552s 00:05:41.449 05:30:29 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.449 05:30:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.449 ************************************ 00:05:41.449 END TEST skip_rpc 00:05:41.449 ************************************ 00:05:41.449 05:30:29 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.449 05:30:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.449 05:30:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.449 05:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:41.449 ************************************ 00:05:41.449 START TEST rpc_client 00:05:41.449 ************************************ 00:05:41.449 05:30:29 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.709 * Looking for test storage... 00:05:41.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:41.709 05:30:29 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.709 05:30:29 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.709 05:30:29 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.709 05:30:29 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.709 05:30:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:41.709 05:30:29 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.709 05:30:29 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.709 --rc genhtml_branch_coverage=1 00:05:41.709 --rc genhtml_function_coverage=1 00:05:41.709 --rc genhtml_legend=1 00:05:41.709 --rc geninfo_all_blocks=1 00:05:41.709 --rc geninfo_unexecuted_blocks=1 00:05:41.709 00:05:41.709 ' 00:05:41.709 05:30:29 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.709 --rc genhtml_branch_coverage=1 
00:05:41.709 --rc genhtml_function_coverage=1 00:05:41.709 --rc genhtml_legend=1 00:05:41.709 --rc geninfo_all_blocks=1 00:05:41.709 --rc geninfo_unexecuted_blocks=1 00:05:41.709 00:05:41.709 ' 00:05:41.709 05:30:29 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.709 --rc genhtml_branch_coverage=1 00:05:41.709 --rc genhtml_function_coverage=1 00:05:41.709 --rc genhtml_legend=1 00:05:41.709 --rc geninfo_all_blocks=1 00:05:41.709 --rc geninfo_unexecuted_blocks=1 00:05:41.709 00:05:41.709 ' 00:05:41.709 05:30:29 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.709 --rc genhtml_branch_coverage=1 00:05:41.709 --rc genhtml_function_coverage=1 00:05:41.709 --rc genhtml_legend=1 00:05:41.709 --rc geninfo_all_blocks=1 00:05:41.709 --rc geninfo_unexecuted_blocks=1 00:05:41.709 00:05:41.709 ' 00:05:41.709 05:30:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:41.709 OK 00:05:41.709 05:30:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:41.709 00:05:41.709 real 0m0.203s 00:05:41.709 user 0m0.126s 00:05:41.709 sys 0m0.091s 00:05:41.709 05:30:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.709 05:30:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:41.709 ************************************ 00:05:41.709 END TEST rpc_client 00:05:41.709 ************************************ 00:05:41.710 05:30:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.710 05:30:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.710 05:30:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.710 05:30:29 -- common/autotest_common.sh@10 
-- # set +x 00:05:41.710 ************************************ 00:05:41.710 START TEST json_config 00:05:41.710 ************************************ 00:05:41.710 05:30:29 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.969 05:30:29 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.969 05:30:29 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.969 05:30:29 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.969 05:30:29 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.969 05:30:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.969 05:30:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.969 05:30:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.969 05:30:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.969 05:30:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.969 05:30:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.969 05:30:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.969 05:30:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.969 05:30:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.969 05:30:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.969 05:30:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.969 05:30:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:41.969 05:30:29 json_config -- scripts/common.sh@345 -- # : 1 00:05:41.969 05:30:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.969 05:30:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.969 05:30:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:41.969 05:30:29 json_config -- scripts/common.sh@353 -- # local d=1 00:05:41.969 05:30:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.969 05:30:29 json_config -- scripts/common.sh@355 -- # echo 1 00:05:41.969 05:30:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.969 05:30:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:41.969 05:30:29 json_config -- scripts/common.sh@353 -- # local d=2 00:05:41.969 05:30:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.969 05:30:29 json_config -- scripts/common.sh@355 -- # echo 2 00:05:41.969 05:30:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.969 05:30:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.969 05:30:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.969 05:30:29 json_config -- scripts/common.sh@368 -- # return 0 00:05:41.969 05:30:29 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.969 05:30:29 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.969 --rc genhtml_branch_coverage=1 00:05:41.969 --rc genhtml_function_coverage=1 00:05:41.969 --rc genhtml_legend=1 00:05:41.969 --rc geninfo_all_blocks=1 00:05:41.969 --rc geninfo_unexecuted_blocks=1 00:05:41.969 00:05:41.969 ' 00:05:41.969 05:30:29 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.969 --rc genhtml_branch_coverage=1 00:05:41.969 --rc genhtml_function_coverage=1 00:05:41.969 --rc genhtml_legend=1 00:05:41.969 --rc geninfo_all_blocks=1 00:05:41.969 --rc geninfo_unexecuted_blocks=1 00:05:41.969 00:05:41.969 ' 00:05:41.969 05:30:29 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.969 --rc genhtml_branch_coverage=1 00:05:41.969 --rc genhtml_function_coverage=1 00:05:41.969 --rc genhtml_legend=1 00:05:41.969 --rc geninfo_all_blocks=1 00:05:41.969 --rc geninfo_unexecuted_blocks=1 00:05:41.969 00:05:41.969 ' 00:05:41.969 05:30:29 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.969 --rc genhtml_branch_coverage=1 00:05:41.969 --rc genhtml_function_coverage=1 00:05:41.969 --rc genhtml_legend=1 00:05:41.969 --rc geninfo_all_blocks=1 00:05:41.969 --rc geninfo_unexecuted_blocks=1 00:05:41.969 00:05:41.969 ' 00:05:41.969 05:30:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.969 05:30:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:41.969 05:30:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.969 05:30:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.969 05:30:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.969 05:30:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.969 05:30:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.970 05:30:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.970 05:30:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.970 05:30:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.970 05:30:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.970 05:30:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.970 05:30:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.970 05:30:29 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.970 05:30:29 json_config -- paths/export.sh@5 -- # export PATH 00:05:41.970 05:30:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@51 -- # : 0 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.970 05:30:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:41.970 INFO: JSON configuration test init 00:05:41.970 05:30:29 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:41.970 05:30:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.970 05:30:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:41.970 05:30:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.970 05:30:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.970 05:30:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:41.970 05:30:29 json_config -- json_config/common.sh@9 -- # local app=target 00:05:41.970 05:30:29 json_config -- json_config/common.sh@10 -- # shift 00:05:41.970 05:30:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.970 05:30:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.970 05:30:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.970 05:30:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.970 05:30:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.970 05:30:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1001752 00:05:41.970 05:30:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.970 Waiting for target to run... 
00:05:41.970 05:30:29 json_config -- json_config/common.sh@25 -- # waitforlisten 1001752 /var/tmp/spdk_tgt.sock 00:05:41.970 05:30:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 1001752 ']' 00:05:41.970 05:30:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:41.970 05:30:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.970 05:30:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.970 05:30:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.970 05:30:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.970 05:30:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.970 [2024-12-10 05:30:29.776812] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:05:41.970 [2024-12-10 05:30:29.776861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001752 ] 00:05:42.230 [2024-12-10 05:30:30.071873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.230 [2024-12-10 05:30:30.109452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.797 05:30:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.797 05:30:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:42.797 05:30:30 json_config -- json_config/common.sh@26 -- # echo '' 00:05:42.797 00:05:42.797 05:30:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:42.797 05:30:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:42.797 05:30:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.797 05:30:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.797 05:30:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:42.797 05:30:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:42.797 05:30:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.797 05:30:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.797 05:30:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:42.797 05:30:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:42.797 05:30:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:46.085 05:30:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.085 05:30:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:46.085 05:30:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@54 -- # sort 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:46.085 05:30:33 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:46.085 05:30:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.085 05:30:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:46.085 05:30:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.085 05:30:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:46.085 05:30:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:46.085 05:30:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:46.344 MallocForNvmf0 00:05:46.344 05:30:34 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:46.344 05:30:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:46.602 MallocForNvmf1 00:05:46.602 05:30:34 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.602 05:30:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.861 [2024-12-10 05:30:34.535107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.861 05:30:34 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.861 05:30:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:47.120 05:30:34 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:47.120 05:30:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:47.120 05:30:34 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.120 05:30:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.378 05:30:35 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.378 05:30:35 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.638 [2024-12-10 05:30:35.333517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:47.638 05:30:35 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:47.638 05:30:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.638 05:30:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.638 05:30:35 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:47.638 05:30:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.638 05:30:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.638 05:30:35 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:47.639 05:30:35 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.639 05:30:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.897 MallocBdevForConfigChangeCheck 00:05:47.898 05:30:35 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:47.898 05:30:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.898 05:30:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.898 05:30:35 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:47.898 05:30:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.156 05:30:35 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:05:48.156 INFO: shutting down applications... 00:05:48.156 05:30:35 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:48.156 05:30:35 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:48.156 05:30:35 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:48.156 05:30:35 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:50.058 Calling clear_iscsi_subsystem 00:05:50.058 Calling clear_nvmf_subsystem 00:05:50.058 Calling clear_nbd_subsystem 00:05:50.058 Calling clear_ublk_subsystem 00:05:50.058 Calling clear_vhost_blk_subsystem 00:05:50.058 Calling clear_vhost_scsi_subsystem 00:05:50.058 Calling clear_bdev_subsystem 00:05:50.058 05:30:37 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:50.058 05:30:37 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:50.058 05:30:37 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:50.058 05:30:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.058 05:30:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:50.058 05:30:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:50.317 05:30:38 json_config -- json_config/json_config.sh@352 -- # break 00:05:50.317 05:30:38 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:50.317 05:30:38 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:05:50.317 05:30:38 json_config -- json_config/common.sh@31 -- # local app=target 00:05:50.317 05:30:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.317 05:30:38 json_config -- json_config/common.sh@35 -- # [[ -n 1001752 ]] 00:05:50.317 05:30:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1001752 00:05:50.317 05:30:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.317 05:30:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.317 05:30:38 json_config -- json_config/common.sh@41 -- # kill -0 1001752 00:05:50.317 05:30:38 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.885 05:30:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.885 05:30:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.885 05:30:38 json_config -- json_config/common.sh@41 -- # kill -0 1001752 00:05:50.885 05:30:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:50.885 05:30:38 json_config -- json_config/common.sh@43 -- # break 00:05:50.886 05:30:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:50.886 05:30:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.886 SPDK target shutdown done 00:05:50.886 05:30:38 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:50.886 INFO: relaunching applications... 
00:05:50.886 05:30:38 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.886 05:30:38 json_config -- json_config/common.sh@9 -- # local app=target 00:05:50.886 05:30:38 json_config -- json_config/common.sh@10 -- # shift 00:05:50.886 05:30:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.886 05:30:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.886 05:30:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.886 05:30:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.886 05:30:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.886 05:30:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1003245 00:05:50.886 05:30:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.886 Waiting for target to run... 00:05:50.886 05:30:38 json_config -- json_config/common.sh@25 -- # waitforlisten 1003245 /var/tmp/spdk_tgt.sock 00:05:50.886 05:30:38 json_config -- common/autotest_common.sh@835 -- # '[' -z 1003245 ']' 00:05:50.886 05:30:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.886 05:30:38 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.886 05:30:38 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.886 05:30:38 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:50.886 05:30:38 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.886 05:30:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.886 [2024-12-10 05:30:38.568095] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:05:50.886 [2024-12-10 05:30:38.568153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003245 ] 00:05:51.144 [2024-12-10 05:30:39.031768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.403 [2024-12-10 05:30:39.086839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.691 [2024-12-10 05:30:42.112983] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.691 [2024-12-10 05:30:42.145269] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:54.950 05:30:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.950 05:30:42 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:54.950 05:30:42 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.950 00:05:54.950 05:30:42 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:54.950 05:30:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:54.950 INFO: Checking if target configuration is the same... 
00:05:54.950 05:30:42 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.950 05:30:42 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:54.950 05:30:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.950 + '[' 2 -ne 2 ']' 00:05:54.950 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:54.950 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:54.950 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:54.950 +++ basename /dev/fd/62 00:05:54.950 ++ mktemp /tmp/62.XXX 00:05:54.950 + tmp_file_1=/tmp/62.LqB 00:05:54.950 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.950 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.950 + tmp_file_2=/tmp/spdk_tgt_config.json.TWL 00:05:54.950 + ret=0 00:05:54.950 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.518 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.518 + diff -u /tmp/62.LqB /tmp/spdk_tgt_config.json.TWL 00:05:55.518 + echo 'INFO: JSON config files are the same' 00:05:55.518 INFO: JSON config files are the same 00:05:55.518 + rm /tmp/62.LqB /tmp/spdk_tgt_config.json.TWL 00:05:55.518 + exit 0 00:05:55.518 05:30:43 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:55.518 05:30:43 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:55.518 INFO: changing configuration and checking if this can be detected... 
00:05:55.518 05:30:43 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.518 05:30:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.518 05:30:43 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.518 05:30:43 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:55.518 05:30:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.518 + '[' 2 -ne 2 ']' 00:05:55.518 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:55.518 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:55.518 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:55.518 +++ basename /dev/fd/62 00:05:55.518 ++ mktemp /tmp/62.XXX 00:05:55.518 + tmp_file_1=/tmp/62.pd1 00:05:55.777 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.777 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.777 + tmp_file_2=/tmp/spdk_tgt_config.json.oWp 00:05:55.777 + ret=0 00:05:55.777 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.036 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.036 + diff -u /tmp/62.pd1 /tmp/spdk_tgt_config.json.oWp 00:05:56.036 + ret=1 00:05:56.036 + echo '=== Start of file: /tmp/62.pd1 ===' 00:05:56.036 + cat /tmp/62.pd1 00:05:56.036 + echo '=== End of file: /tmp/62.pd1 ===' 00:05:56.036 + echo '' 00:05:56.036 + echo '=== Start of file: /tmp/spdk_tgt_config.json.oWp ===' 00:05:56.036 + cat /tmp/spdk_tgt_config.json.oWp 00:05:56.036 + echo '=== End of file: /tmp/spdk_tgt_config.json.oWp ===' 00:05:56.036 + echo '' 00:05:56.036 + rm /tmp/62.pd1 /tmp/spdk_tgt_config.json.oWp 00:05:56.036 + exit 1 00:05:56.036 05:30:43 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:56.036 INFO: configuration change detected. 
00:05:56.036 05:30:43 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:56.036 05:30:43 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:56.036 05:30:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.036 05:30:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.036 05:30:43 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:56.037 05:30:43 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:56.037 05:30:43 json_config -- json_config/json_config.sh@324 -- # [[ -n 1003245 ]] 00:05:56.037 05:30:43 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:56.037 05:30:43 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.037 05:30:43 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:56.037 05:30:43 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:56.037 05:30:43 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:56.037 05:30:43 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:56.037 05:30:43 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:56.037 05:30:43 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.037 05:30:43 json_config -- json_config/json_config.sh@330 -- # killprocess 1003245 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@954 -- # '[' -z 1003245 ']' 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@958 -- # kill -0 
1003245 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@959 -- # uname 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1003245 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1003245' 00:05:56.037 killing process with pid 1003245 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@973 -- # kill 1003245 00:05:56.037 05:30:43 json_config -- common/autotest_common.sh@978 -- # wait 1003245 00:05:57.950 05:30:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.950 05:30:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:57.950 05:30:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:57.950 05:30:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.950 05:30:45 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:57.950 05:30:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:57.950 INFO: Success 00:05:57.950 00:05:57.950 real 0m15.905s 00:05:57.950 user 0m16.481s 00:05:57.950 sys 0m2.599s 00:05:57.950 05:30:45 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.950 05:30:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.950 ************************************ 00:05:57.950 END TEST json_config 00:05:57.950 ************************************ 00:05:57.950 05:30:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.950 05:30:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.950 05:30:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.950 05:30:45 -- common/autotest_common.sh@10 -- # set +x 00:05:57.950 ************************************ 00:05:57.950 START TEST json_config_extra_key 00:05:57.950 ************************************ 00:05:57.950 05:30:45 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.950 05:30:45 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.950 05:30:45 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.950 05:30:45 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.950 05:30:45 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.950 05:30:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:57.950 05:30:45 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.950 05:30:45 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.950 --rc genhtml_branch_coverage=1 00:05:57.950 --rc genhtml_function_coverage=1 00:05:57.950 --rc genhtml_legend=1 00:05:57.950 --rc geninfo_all_blocks=1 
00:05:57.950 --rc geninfo_unexecuted_blocks=1 00:05:57.950 00:05:57.950 ' 00:05:57.950 05:30:45 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.950 --rc genhtml_branch_coverage=1 00:05:57.950 --rc genhtml_function_coverage=1 00:05:57.950 --rc genhtml_legend=1 00:05:57.950 --rc geninfo_all_blocks=1 00:05:57.950 --rc geninfo_unexecuted_blocks=1 00:05:57.950 00:05:57.950 ' 00:05:57.950 05:30:45 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.950 --rc genhtml_branch_coverage=1 00:05:57.950 --rc genhtml_function_coverage=1 00:05:57.950 --rc genhtml_legend=1 00:05:57.950 --rc geninfo_all_blocks=1 00:05:57.950 --rc geninfo_unexecuted_blocks=1 00:05:57.950 00:05:57.950 ' 00:05:57.950 05:30:45 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.950 --rc genhtml_branch_coverage=1 00:05:57.950 --rc genhtml_function_coverage=1 00:05:57.950 --rc genhtml_legend=1 00:05:57.950 --rc geninfo_all_blocks=1 00:05:57.950 --rc geninfo_unexecuted_blocks=1 00:05:57.951 00:05:57.951 ' 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.951 05:30:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.951 05:30:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.951 05:30:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.951 05:30:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.951 05:30:45 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.951 05:30:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.951 05:30:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.951 05:30:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:57.951 05:30:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:57.951 05:30:45 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.951 05:30:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:57.951 INFO: launching applications... 00:05:57.951 05:30:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.951 05:30:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.951 05:30:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.951 05:30:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.951 05:30:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.951 05:30:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.951 05:30:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.951 05:30:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.951 05:30:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1004693 00:05:57.951 05:30:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.951 Waiting for target to run... 
00:05:57.951 05:30:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1004693 /var/tmp/spdk_tgt.sock 00:05:57.951 05:30:45 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1004693 ']' 00:05:57.951 05:30:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.951 05:30:45 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.951 05:30:45 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.951 05:30:45 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.951 05:30:45 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.951 05:30:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.951 [2024-12-10 05:30:45.759272] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:05:57.951 [2024-12-10 05:30:45.759319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1004693 ] 00:05:58.519 [2024-12-10 05:30:46.210797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.519 [2024-12-10 05:30:46.265457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.778 05:30:46 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.778 05:30:46 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:58.778 05:30:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:58.778 00:05:58.778 05:30:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:58.778 INFO: shutting down applications... 00:05:58.778 05:30:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:58.778 05:30:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:58.778 05:30:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.778 05:30:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1004693 ]] 00:05:58.778 05:30:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1004693 00:05:58.778 05:30:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.778 05:30:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.778 05:30:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1004693 00:05:58.778 05:30:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.345 05:30:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.345 05:30:47 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.345 05:30:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1004693 00:05:59.345 05:30:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.345 05:30:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:59.345 05:30:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.345 05:30:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.345 SPDK target shutdown done 00:05:59.345 05:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:59.345 Success 00:05:59.345 00:05:59.345 real 0m1.578s 00:05:59.345 user 0m1.202s 00:05:59.345 sys 0m0.559s 00:05:59.345 05:30:47 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.345 05:30:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.345 ************************************ 00:05:59.345 END TEST json_config_extra_key 00:05:59.345 ************************************ 00:05:59.345 05:30:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.345 05:30:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.345 05:30:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.345 05:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:59.345 ************************************ 00:05:59.345 START TEST alias_rpc 00:05:59.345 ************************************ 00:05:59.345 05:30:47 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.604 * Looking for test storage... 
00:05:59.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.605 05:30:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:59.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.605 --rc genhtml_branch_coverage=1 00:05:59.605 --rc genhtml_function_coverage=1 00:05:59.605 --rc genhtml_legend=1 00:05:59.605 --rc geninfo_all_blocks=1 00:05:59.605 --rc geninfo_unexecuted_blocks=1 00:05:59.605 00:05:59.605 ' 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:59.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.605 --rc genhtml_branch_coverage=1 00:05:59.605 --rc genhtml_function_coverage=1 00:05:59.605 --rc genhtml_legend=1 00:05:59.605 --rc geninfo_all_blocks=1 00:05:59.605 --rc geninfo_unexecuted_blocks=1 00:05:59.605 00:05:59.605 ' 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:59.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.605 --rc genhtml_branch_coverage=1 00:05:59.605 --rc genhtml_function_coverage=1 00:05:59.605 --rc genhtml_legend=1 00:05:59.605 --rc geninfo_all_blocks=1 00:05:59.605 --rc geninfo_unexecuted_blocks=1 00:05:59.605 00:05:59.605 ' 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:59.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.605 --rc genhtml_branch_coverage=1 00:05:59.605 --rc genhtml_function_coverage=1 00:05:59.605 --rc genhtml_legend=1 00:05:59.605 --rc geninfo_all_blocks=1 00:05:59.605 --rc geninfo_unexecuted_blocks=1 00:05:59.605 00:05:59.605 ' 00:05:59.605 05:30:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.605 05:30:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1004977 00:05:59.605 05:30:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1004977 00:05:59.605 05:30:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1004977 ']' 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.605 05:30:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.605 [2024-12-10 05:30:47.393955] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
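Each test section above begins with the same `lcov --version` check, driven by `scripts/common.sh` `lt 1.15 2` → `cmp_versions`: both versions are split on `.-:` into arrays and compared component-wise. A hedged sketch of that comparison; `version_lt` is my name for it, and numeric components are assumed (the real `decimal` helper also validates with `^[0-9]+$` as the trace shows):

```shell
# Component-wise "strictly less than" for dotted versions, e.g. 1.15 < 2.
# Missing components are treated as 0, matching shorter-vs-longer versions.
version_lt() {
  local -a v1 v2
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # assumes numeric components
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not less-than
}
```

With this, `version_lt 1.15 2` succeeds because the first components already decide (1 < 2), which is exactly why the harness treats lcov 1.15 as older than 2.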
00:05:59.605 [2024-12-10 05:30:47.394004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1004977 ] 00:05:59.605 [2024-12-10 05:30:47.465129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.864 [2024-12-10 05:30:47.507279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.864 05:30:47 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.864 05:30:47 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.864 05:30:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:00.123 05:30:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1004977 00:06:00.123 05:30:47 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1004977 ']' 00:06:00.123 05:30:47 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1004977 00:06:00.123 05:30:47 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:00.123 05:30:47 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.123 05:30:47 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1004977 00:06:00.123 05:30:47 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.123 05:30:47 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.123 05:30:47 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1004977' 00:06:00.123 killing process with pid 1004977 00:06:00.123 05:30:47 alias_rpc -- common/autotest_common.sh@973 -- # kill 1004977 00:06:00.123 05:30:47 alias_rpc -- common/autotest_common.sh@978 -- # wait 1004977 00:06:00.691 00:06:00.691 real 0m1.129s 00:06:00.691 user 0m1.151s 00:06:00.691 sys 0m0.411s 00:06:00.691 05:30:48 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.691 05:30:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.691 ************************************ 00:06:00.691 END TEST alias_rpc 00:06:00.691 ************************************ 00:06:00.691 05:30:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:00.691 05:30:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.691 05:30:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.691 05:30:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.691 05:30:48 -- common/autotest_common.sh@10 -- # set +x 00:06:00.691 ************************************ 00:06:00.691 START TEST spdkcli_tcp 00:06:00.691 ************************************ 00:06:00.691 05:30:48 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.691 * Looking for test storage... 
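The `waitforlisten` calls throughout this log block until spdk_tgt is up on /var/tmp/spdk.sock, retrying up to `max_retries=100` times. A simplified sketch of that wait loop; the real helper checks the RPC socket specifically, whereas this version only polls for a path to appear, and the 0.1s interval is an assumption:

```shell
# Retry until a filesystem path exists, up to max_retries attempts.
# Simplification: the real waitforlisten probes the UNIX-domain RPC socket;
# here plain path existence stands in for "process is listening".
waitforpath() {
  local path=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    [ -e "$path" ] && return 0
    sleep 0.1
  done
  return 1   # gave up; caller decides whether to kill the target
}
```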
00:06:00.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:00.691 05:30:48 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.691 05:30:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.691 05:30:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.691 05:30:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.691 05:30:48 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:00.691 05:30:48 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.691 05:30:48 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.691 --rc genhtml_branch_coverage=1 00:06:00.691 --rc genhtml_function_coverage=1 00:06:00.691 --rc genhtml_legend=1 00:06:00.691 --rc geninfo_all_blocks=1 00:06:00.691 --rc geninfo_unexecuted_blocks=1 00:06:00.691 00:06:00.691 ' 00:06:00.691 05:30:48 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.691 --rc genhtml_branch_coverage=1 00:06:00.692 --rc genhtml_function_coverage=1 00:06:00.692 --rc genhtml_legend=1 00:06:00.692 --rc geninfo_all_blocks=1 00:06:00.692 --rc geninfo_unexecuted_blocks=1 00:06:00.692 00:06:00.692 ' 00:06:00.692 05:30:48 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:00.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.692 --rc genhtml_branch_coverage=1 00:06:00.692 --rc genhtml_function_coverage=1 00:06:00.692 --rc genhtml_legend=1 00:06:00.692 --rc geninfo_all_blocks=1 00:06:00.692 --rc geninfo_unexecuted_blocks=1 00:06:00.692 00:06:00.692 ' 00:06:00.692 05:30:48 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.692 --rc genhtml_branch_coverage=1 00:06:00.692 --rc genhtml_function_coverage=1 00:06:00.692 --rc genhtml_legend=1 00:06:00.692 --rc geninfo_all_blocks=1 00:06:00.692 --rc geninfo_unexecuted_blocks=1 00:06:00.692 00:06:00.692 ' 00:06:00.692 05:30:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:00.692 05:30:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:00.692 05:30:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:00.692 05:30:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:00.692 05:30:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:00.692 05:30:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:00.692 05:30:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:00.692 05:30:48 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.692 05:30:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.692 05:30:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1005258 00:06:00.692 05:30:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1005258 00:06:00.692 05:30:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:00.692 05:30:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1005258 ']' 00:06:00.692 05:30:48 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.692 05:30:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.692 05:30:48 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.692 05:30:48 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.692 05:30:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.951 [2024-12-10 05:30:48.596123] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:00.951 [2024-12-10 05:30:48.596178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005258 ] 00:06:00.951 [2024-12-10 05:30:48.655969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.951 [2024-12-10 05:30:48.701185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.951 [2024-12-10 05:30:48.701188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.210 05:30:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.210 05:30:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:01.210 05:30:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1005271 00:06:01.210 05:30:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:01.210 05:30:48 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:01.210 [ 00:06:01.210 "bdev_malloc_delete", 00:06:01.210 "bdev_malloc_create", 00:06:01.210 "bdev_null_resize", 00:06:01.210 "bdev_null_delete", 00:06:01.210 "bdev_null_create", 00:06:01.210 "bdev_nvme_cuse_unregister", 00:06:01.210 "bdev_nvme_cuse_register", 00:06:01.210 "bdev_opal_new_user", 00:06:01.210 "bdev_opal_set_lock_state", 00:06:01.210 "bdev_opal_delete", 00:06:01.210 "bdev_opal_get_info", 00:06:01.210 "bdev_opal_create", 00:06:01.210 "bdev_nvme_opal_revert", 00:06:01.210 "bdev_nvme_opal_init", 00:06:01.210 "bdev_nvme_send_cmd", 00:06:01.210 "bdev_nvme_set_keys", 00:06:01.210 "bdev_nvme_get_path_iostat", 00:06:01.210 "bdev_nvme_get_mdns_discovery_info", 00:06:01.210 "bdev_nvme_stop_mdns_discovery", 00:06:01.210 "bdev_nvme_start_mdns_discovery", 00:06:01.210 "bdev_nvme_set_multipath_policy", 00:06:01.210 "bdev_nvme_set_preferred_path", 00:06:01.210 "bdev_nvme_get_io_paths", 00:06:01.210 "bdev_nvme_remove_error_injection", 00:06:01.210 "bdev_nvme_add_error_injection", 00:06:01.210 "bdev_nvme_get_discovery_info", 00:06:01.210 "bdev_nvme_stop_discovery", 00:06:01.210 "bdev_nvme_start_discovery", 00:06:01.210 "bdev_nvme_get_controller_health_info", 00:06:01.210 "bdev_nvme_disable_controller", 00:06:01.210 "bdev_nvme_enable_controller", 00:06:01.210 "bdev_nvme_reset_controller", 00:06:01.210 "bdev_nvme_get_transport_statistics", 00:06:01.210 "bdev_nvme_apply_firmware", 00:06:01.210 "bdev_nvme_detach_controller", 00:06:01.210 "bdev_nvme_get_controllers", 00:06:01.210 "bdev_nvme_attach_controller", 00:06:01.210 "bdev_nvme_set_hotplug", 00:06:01.210 "bdev_nvme_set_options", 00:06:01.210 "bdev_passthru_delete", 00:06:01.210 "bdev_passthru_create", 00:06:01.210 "bdev_lvol_set_parent_bdev", 00:06:01.210 "bdev_lvol_set_parent", 00:06:01.210 "bdev_lvol_check_shallow_copy", 00:06:01.210 "bdev_lvol_start_shallow_copy", 00:06:01.210 "bdev_lvol_grow_lvstore", 00:06:01.210 
"bdev_lvol_get_lvols", 00:06:01.210 "bdev_lvol_get_lvstores", 00:06:01.210 "bdev_lvol_delete", 00:06:01.210 "bdev_lvol_set_read_only", 00:06:01.210 "bdev_lvol_resize", 00:06:01.210 "bdev_lvol_decouple_parent", 00:06:01.210 "bdev_lvol_inflate", 00:06:01.210 "bdev_lvol_rename", 00:06:01.210 "bdev_lvol_clone_bdev", 00:06:01.210 "bdev_lvol_clone", 00:06:01.210 "bdev_lvol_snapshot", 00:06:01.210 "bdev_lvol_create", 00:06:01.210 "bdev_lvol_delete_lvstore", 00:06:01.210 "bdev_lvol_rename_lvstore", 00:06:01.210 "bdev_lvol_create_lvstore", 00:06:01.210 "bdev_raid_set_options", 00:06:01.210 "bdev_raid_remove_base_bdev", 00:06:01.210 "bdev_raid_add_base_bdev", 00:06:01.210 "bdev_raid_delete", 00:06:01.210 "bdev_raid_create", 00:06:01.210 "bdev_raid_get_bdevs", 00:06:01.210 "bdev_error_inject_error", 00:06:01.210 "bdev_error_delete", 00:06:01.210 "bdev_error_create", 00:06:01.210 "bdev_split_delete", 00:06:01.210 "bdev_split_create", 00:06:01.210 "bdev_delay_delete", 00:06:01.210 "bdev_delay_create", 00:06:01.210 "bdev_delay_update_latency", 00:06:01.210 "bdev_zone_block_delete", 00:06:01.210 "bdev_zone_block_create", 00:06:01.210 "blobfs_create", 00:06:01.210 "blobfs_detect", 00:06:01.210 "blobfs_set_cache_size", 00:06:01.210 "bdev_aio_delete", 00:06:01.210 "bdev_aio_rescan", 00:06:01.210 "bdev_aio_create", 00:06:01.210 "bdev_ftl_set_property", 00:06:01.210 "bdev_ftl_get_properties", 00:06:01.210 "bdev_ftl_get_stats", 00:06:01.210 "bdev_ftl_unmap", 00:06:01.210 "bdev_ftl_unload", 00:06:01.210 "bdev_ftl_delete", 00:06:01.210 "bdev_ftl_load", 00:06:01.210 "bdev_ftl_create", 00:06:01.210 "bdev_virtio_attach_controller", 00:06:01.210 "bdev_virtio_scsi_get_devices", 00:06:01.210 "bdev_virtio_detach_controller", 00:06:01.210 "bdev_virtio_blk_set_hotplug", 00:06:01.210 "bdev_iscsi_delete", 00:06:01.210 "bdev_iscsi_create", 00:06:01.210 "bdev_iscsi_set_options", 00:06:01.210 "accel_error_inject_error", 00:06:01.210 "ioat_scan_accel_module", 00:06:01.211 "dsa_scan_accel_module", 
00:06:01.211 "iaa_scan_accel_module", 00:06:01.211 "vfu_virtio_create_fs_endpoint", 00:06:01.211 "vfu_virtio_create_scsi_endpoint", 00:06:01.211 "vfu_virtio_scsi_remove_target", 00:06:01.211 "vfu_virtio_scsi_add_target", 00:06:01.211 "vfu_virtio_create_blk_endpoint", 00:06:01.211 "vfu_virtio_delete_endpoint", 00:06:01.211 "keyring_file_remove_key", 00:06:01.211 "keyring_file_add_key", 00:06:01.211 "keyring_linux_set_options", 00:06:01.211 "fsdev_aio_delete", 00:06:01.211 "fsdev_aio_create", 00:06:01.211 "iscsi_get_histogram", 00:06:01.211 "iscsi_enable_histogram", 00:06:01.211 "iscsi_set_options", 00:06:01.211 "iscsi_get_auth_groups", 00:06:01.211 "iscsi_auth_group_remove_secret", 00:06:01.211 "iscsi_auth_group_add_secret", 00:06:01.211 "iscsi_delete_auth_group", 00:06:01.211 "iscsi_create_auth_group", 00:06:01.211 "iscsi_set_discovery_auth", 00:06:01.211 "iscsi_get_options", 00:06:01.211 "iscsi_target_node_request_logout", 00:06:01.211 "iscsi_target_node_set_redirect", 00:06:01.211 "iscsi_target_node_set_auth", 00:06:01.211 "iscsi_target_node_add_lun", 00:06:01.211 "iscsi_get_stats", 00:06:01.211 "iscsi_get_connections", 00:06:01.211 "iscsi_portal_group_set_auth", 00:06:01.211 "iscsi_start_portal_group", 00:06:01.211 "iscsi_delete_portal_group", 00:06:01.211 "iscsi_create_portal_group", 00:06:01.211 "iscsi_get_portal_groups", 00:06:01.211 "iscsi_delete_target_node", 00:06:01.211 "iscsi_target_node_remove_pg_ig_maps", 00:06:01.211 "iscsi_target_node_add_pg_ig_maps", 00:06:01.211 "iscsi_create_target_node", 00:06:01.211 "iscsi_get_target_nodes", 00:06:01.211 "iscsi_delete_initiator_group", 00:06:01.211 "iscsi_initiator_group_remove_initiators", 00:06:01.211 "iscsi_initiator_group_add_initiators", 00:06:01.211 "iscsi_create_initiator_group", 00:06:01.211 "iscsi_get_initiator_groups", 00:06:01.211 "nvmf_set_crdt", 00:06:01.211 "nvmf_set_config", 00:06:01.211 "nvmf_set_max_subsystems", 00:06:01.211 "nvmf_stop_mdns_prr", 00:06:01.211 "nvmf_publish_mdns_prr", 
00:06:01.211 "nvmf_subsystem_get_listeners", 00:06:01.211 "nvmf_subsystem_get_qpairs", 00:06:01.211 "nvmf_subsystem_get_controllers", 00:06:01.211 "nvmf_get_stats", 00:06:01.211 "nvmf_get_transports", 00:06:01.211 "nvmf_create_transport", 00:06:01.211 "nvmf_get_targets", 00:06:01.211 "nvmf_delete_target", 00:06:01.211 "nvmf_create_target", 00:06:01.211 "nvmf_subsystem_allow_any_host", 00:06:01.211 "nvmf_subsystem_set_keys", 00:06:01.211 "nvmf_subsystem_remove_host", 00:06:01.211 "nvmf_subsystem_add_host", 00:06:01.211 "nvmf_ns_remove_host", 00:06:01.211 "nvmf_ns_add_host", 00:06:01.211 "nvmf_subsystem_remove_ns", 00:06:01.211 "nvmf_subsystem_set_ns_ana_group", 00:06:01.211 "nvmf_subsystem_add_ns", 00:06:01.211 "nvmf_subsystem_listener_set_ana_state", 00:06:01.211 "nvmf_discovery_get_referrals", 00:06:01.211 "nvmf_discovery_remove_referral", 00:06:01.211 "nvmf_discovery_add_referral", 00:06:01.211 "nvmf_subsystem_remove_listener", 00:06:01.211 "nvmf_subsystem_add_listener", 00:06:01.211 "nvmf_delete_subsystem", 00:06:01.211 "nvmf_create_subsystem", 00:06:01.211 "nvmf_get_subsystems", 00:06:01.211 "env_dpdk_get_mem_stats", 00:06:01.211 "nbd_get_disks", 00:06:01.211 "nbd_stop_disk", 00:06:01.211 "nbd_start_disk", 00:06:01.211 "ublk_recover_disk", 00:06:01.211 "ublk_get_disks", 00:06:01.211 "ublk_stop_disk", 00:06:01.211 "ublk_start_disk", 00:06:01.211 "ublk_destroy_target", 00:06:01.211 "ublk_create_target", 00:06:01.211 "virtio_blk_create_transport", 00:06:01.211 "virtio_blk_get_transports", 00:06:01.211 "vhost_controller_set_coalescing", 00:06:01.211 "vhost_get_controllers", 00:06:01.211 "vhost_delete_controller", 00:06:01.211 "vhost_create_blk_controller", 00:06:01.211 "vhost_scsi_controller_remove_target", 00:06:01.211 "vhost_scsi_controller_add_target", 00:06:01.211 "vhost_start_scsi_controller", 00:06:01.211 "vhost_create_scsi_controller", 00:06:01.211 "thread_set_cpumask", 00:06:01.211 "scheduler_set_options", 00:06:01.211 "framework_get_governor", 00:06:01.211 
"framework_get_scheduler", 00:06:01.211 "framework_set_scheduler", 00:06:01.211 "framework_get_reactors", 00:06:01.211 "thread_get_io_channels", 00:06:01.211 "thread_get_pollers", 00:06:01.211 "thread_get_stats", 00:06:01.211 "framework_monitor_context_switch", 00:06:01.211 "spdk_kill_instance", 00:06:01.211 "log_enable_timestamps", 00:06:01.211 "log_get_flags", 00:06:01.211 "log_clear_flag", 00:06:01.211 "log_set_flag", 00:06:01.211 "log_get_level", 00:06:01.211 "log_set_level", 00:06:01.211 "log_get_print_level", 00:06:01.211 "log_set_print_level", 00:06:01.211 "framework_enable_cpumask_locks", 00:06:01.211 "framework_disable_cpumask_locks", 00:06:01.211 "framework_wait_init", 00:06:01.211 "framework_start_init", 00:06:01.211 "scsi_get_devices", 00:06:01.211 "bdev_get_histogram", 00:06:01.211 "bdev_enable_histogram", 00:06:01.211 "bdev_set_qos_limit", 00:06:01.211 "bdev_set_qd_sampling_period", 00:06:01.211 "bdev_get_bdevs", 00:06:01.211 "bdev_reset_iostat", 00:06:01.211 "bdev_get_iostat", 00:06:01.211 "bdev_examine", 00:06:01.211 "bdev_wait_for_examine", 00:06:01.211 "bdev_set_options", 00:06:01.211 "accel_get_stats", 00:06:01.211 "accel_set_options", 00:06:01.211 "accel_set_driver", 00:06:01.211 "accel_crypto_key_destroy", 00:06:01.211 "accel_crypto_keys_get", 00:06:01.211 "accel_crypto_key_create", 00:06:01.211 "accel_assign_opc", 00:06:01.211 "accel_get_module_info", 00:06:01.211 "accel_get_opc_assignments", 00:06:01.211 "vmd_rescan", 00:06:01.211 "vmd_remove_device", 00:06:01.211 "vmd_enable", 00:06:01.211 "sock_get_default_impl", 00:06:01.211 "sock_set_default_impl", 00:06:01.211 "sock_impl_set_options", 00:06:01.211 "sock_impl_get_options", 00:06:01.211 "iobuf_get_stats", 00:06:01.211 "iobuf_set_options", 00:06:01.211 "keyring_get_keys", 00:06:01.211 "vfu_tgt_set_base_path", 00:06:01.211 "framework_get_pci_devices", 00:06:01.211 "framework_get_config", 00:06:01.211 "framework_get_subsystems", 00:06:01.211 "fsdev_set_opts", 00:06:01.211 "fsdev_get_opts", 
00:06:01.211 "trace_get_info", 00:06:01.211 "trace_get_tpoint_group_mask", 00:06:01.211 "trace_disable_tpoint_group", 00:06:01.211 "trace_enable_tpoint_group", 00:06:01.211 "trace_clear_tpoint_mask", 00:06:01.211 "trace_set_tpoint_mask", 00:06:01.211 "notify_get_notifications", 00:06:01.211 "notify_get_types", 00:06:01.211 "spdk_get_version", 00:06:01.211 "rpc_get_methods" 00:06:01.211 ] 00:06:01.470 05:30:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.470 05:30:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:01.470 05:30:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1005258 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1005258 ']' 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1005258 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1005258 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1005258' 00:06:01.470 killing process with pid 1005258 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1005258 00:06:01.470 05:30:49 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1005258 00:06:01.729 00:06:01.729 real 0m1.123s 00:06:01.729 user 0m1.921s 00:06:01.729 sys 0m0.439s 00:06:01.729 05:30:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.729 05:30:49 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.729 ************************************ 00:06:01.729 END TEST spdkcli_tcp 00:06:01.729 ************************************ 00:06:01.729 05:30:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.729 05:30:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.729 05:30:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.729 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:06:01.729 ************************************ 00:06:01.729 START TEST dpdk_mem_utility 00:06:01.729 ************************************ 00:06:01.729 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.988 * Looking for test storage... 00:06:01.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.988 05:30:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 
'LCOV_OPTS= 00:06:01.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.988 --rc genhtml_branch_coverage=1 00:06:01.988 --rc genhtml_function_coverage=1 00:06:01.988 --rc genhtml_legend=1 00:06:01.988 --rc geninfo_all_blocks=1 00:06:01.988 --rc geninfo_unexecuted_blocks=1 00:06:01.988 00:06:01.988 ' 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:01.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.988 --rc genhtml_branch_coverage=1 00:06:01.988 --rc genhtml_function_coverage=1 00:06:01.988 --rc genhtml_legend=1 00:06:01.988 --rc geninfo_all_blocks=1 00:06:01.988 --rc geninfo_unexecuted_blocks=1 00:06:01.988 00:06:01.988 ' 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:01.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.988 --rc genhtml_branch_coverage=1 00:06:01.988 --rc genhtml_function_coverage=1 00:06:01.988 --rc genhtml_legend=1 00:06:01.988 --rc geninfo_all_blocks=1 00:06:01.988 --rc geninfo_unexecuted_blocks=1 00:06:01.988 00:06:01.988 ' 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:01.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.988 --rc genhtml_branch_coverage=1 00:06:01.988 --rc genhtml_function_coverage=1 00:06:01.988 --rc genhtml_legend=1 00:06:01.988 --rc geninfo_all_blocks=1 00:06:01.988 --rc geninfo_unexecuted_blocks=1 00:06:01.988 00:06:01.988 ' 00:06:01.988 05:30:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.988 05:30:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1005560 00:06:01.988 05:30:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1005560 00:06:01.988 05:30:49 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1005560 ']' 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.988 05:30:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.989 [2024-12-10 05:30:49.779853] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:01.989 [2024-12-10 05:30:49.779901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005560 ] 00:06:01.989 [2024-12-10 05:30:49.853919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.248 [2024-12-10 05:30:49.895247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.248 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.248 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:02.248 05:30:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:02.248 05:30:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:02.248 05:30:50 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.248 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.248 { 00:06:02.248 "filename": "/tmp/spdk_mem_dump.txt" 00:06:02.248 } 00:06:02.248 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.248 05:30:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:02.507 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:02.507 1 heaps totaling size 818.000000 MiB 00:06:02.507 size: 818.000000 MiB heap id: 0 00:06:02.507 end heaps---------- 00:06:02.507 9 mempools totaling size 603.782043 MiB 00:06:02.507 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:02.507 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:02.507 size: 100.555481 MiB name: bdev_io_1005560 00:06:02.507 size: 50.003479 MiB name: msgpool_1005560 00:06:02.507 size: 36.509338 MiB name: fsdev_io_1005560 00:06:02.507 size: 21.763794 MiB name: PDU_Pool 00:06:02.507 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:02.507 size: 4.133484 MiB name: evtpool_1005560 00:06:02.507 size: 0.026123 MiB name: Session_Pool 00:06:02.507 end mempools------- 00:06:02.507 6 memzones totaling size 4.142822 MiB 00:06:02.507 size: 1.000366 MiB name: RG_ring_0_1005560 00:06:02.507 size: 1.000366 MiB name: RG_ring_1_1005560 00:06:02.507 size: 1.000366 MiB name: RG_ring_4_1005560 00:06:02.507 size: 1.000366 MiB name: RG_ring_5_1005560 00:06:02.507 size: 0.125366 MiB name: RG_ring_2_1005560 00:06:02.507 size: 0.015991 MiB name: RG_ring_3_1005560 00:06:02.507 end memzones------- 00:06:02.507 05:30:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.507 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:02.507 list of free elements. 
size: 10.852478 MiB 00:06:02.507 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:02.507 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:02.507 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:02.507 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:02.507 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:02.507 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:02.507 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:02.507 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:02.507 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:02.507 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:02.507 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:02.507 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:02.507 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:02.507 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:02.507 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:02.507 list of standard malloc elements. 
size: 199.218628 MiB 00:06:02.507 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:02.507 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:02.507 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:02.507 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:02.507 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:02.507 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:02.507 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:02.507 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:02.507 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:02.507 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:02.507 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:02.507 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:02.507 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:02.507 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:02.507 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:02.507 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:02.507 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:02.507 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:02.507 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:02.507 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:02.507 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:02.507 list of memzone associated elements. 
size: 607.928894 MiB 00:06:02.507 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:02.507 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.507 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:02.507 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.507 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:02.507 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1005560_0 00:06:02.507 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:02.507 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1005560_0 00:06:02.507 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:02.507 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1005560_0 00:06:02.507 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:02.508 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.508 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:02.508 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.508 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:02.508 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1005560_0 00:06:02.508 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:02.508 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1005560 00:06:02.508 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:02.508 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1005560 00:06:02.508 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:02.508 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.508 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:02.508 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.508 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:02.508 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.508 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:02.508 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.508 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:02.508 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1005560 00:06:02.508 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:02.508 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1005560 00:06:02.508 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:02.508 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1005560 00:06:02.508 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:02.508 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1005560 00:06:02.508 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:02.508 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1005560 00:06:02.508 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:02.508 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1005560 00:06:02.508 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:02.508 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.508 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:02.508 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.508 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:02.508 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.508 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:02.508 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1005560 00:06:02.508 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:02.508 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1005560 00:06:02.508 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:06:02.508 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.508 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:02.508 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.508 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:02.508 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1005560 00:06:02.508 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:02.508 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.508 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:02.508 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1005560 00:06:02.508 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:02.508 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1005560 00:06:02.508 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:02.508 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1005560 00:06:02.508 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:02.508 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.508 05:30:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.508 05:30:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1005560 00:06:02.508 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1005560 ']' 00:06:02.508 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1005560 00:06:02.508 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:02.508 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.508 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1005560 00:06:02.508 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.508 05:30:50 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.508 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1005560' 00:06:02.508 killing process with pid 1005560 00:06:02.508 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1005560 00:06:02.508 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1005560 00:06:02.767 00:06:02.767 real 0m1.008s 00:06:02.767 user 0m0.930s 00:06:02.767 sys 0m0.417s 00:06:02.767 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.767 05:30:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.767 ************************************ 00:06:02.767 END TEST dpdk_mem_utility 00:06:02.767 ************************************ 00:06:02.767 05:30:50 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:02.767 05:30:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.767 05:30:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.767 05:30:50 -- common/autotest_common.sh@10 -- # set +x 00:06:02.767 ************************************ 00:06:02.767 START TEST event 00:06:02.767 ************************************ 00:06:02.767 05:30:50 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:03.026 * Looking for test storage... 
00:06:03.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:03.026 05:30:50 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.026 05:30:50 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.026 05:30:50 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.026 05:30:50 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.026 05:30:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.026 05:30:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.026 05:30:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.026 05:30:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.026 05:30:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.026 05:30:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.026 05:30:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.026 05:30:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.026 05:30:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.026 05:30:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.026 05:30:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.026 05:30:50 event -- scripts/common.sh@344 -- # case "$op" in 00:06:03.026 05:30:50 event -- scripts/common.sh@345 -- # : 1 00:06:03.026 05:30:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.026 05:30:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.026 05:30:50 event -- scripts/common.sh@365 -- # decimal 1 00:06:03.026 05:30:50 event -- scripts/common.sh@353 -- # local d=1 00:06:03.026 05:30:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.026 05:30:50 event -- scripts/common.sh@355 -- # echo 1 00:06:03.026 05:30:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.026 05:30:50 event -- scripts/common.sh@366 -- # decimal 2 00:06:03.026 05:30:50 event -- scripts/common.sh@353 -- # local d=2 00:06:03.026 05:30:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.026 05:30:50 event -- scripts/common.sh@355 -- # echo 2 00:06:03.026 05:30:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.026 05:30:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.026 05:30:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.026 05:30:50 event -- scripts/common.sh@368 -- # return 0 00:06:03.026 05:30:50 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.026 05:30:50 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.026 --rc genhtml_branch_coverage=1 00:06:03.026 --rc genhtml_function_coverage=1 00:06:03.026 --rc genhtml_legend=1 00:06:03.026 --rc geninfo_all_blocks=1 00:06:03.026 --rc geninfo_unexecuted_blocks=1 00:06:03.026 00:06:03.026 ' 00:06:03.026 05:30:50 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.026 --rc genhtml_branch_coverage=1 00:06:03.026 --rc genhtml_function_coverage=1 00:06:03.026 --rc genhtml_legend=1 00:06:03.026 --rc geninfo_all_blocks=1 00:06:03.026 --rc geninfo_unexecuted_blocks=1 00:06:03.026 00:06:03.026 ' 00:06:03.026 05:30:50 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.026 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:03.026 --rc genhtml_branch_coverage=1 00:06:03.026 --rc genhtml_function_coverage=1 00:06:03.026 --rc genhtml_legend=1 00:06:03.026 --rc geninfo_all_blocks=1 00:06:03.026 --rc geninfo_unexecuted_blocks=1 00:06:03.026 00:06:03.026 ' 00:06:03.026 05:30:50 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.026 --rc genhtml_branch_coverage=1 00:06:03.026 --rc genhtml_function_coverage=1 00:06:03.026 --rc genhtml_legend=1 00:06:03.026 --rc geninfo_all_blocks=1 00:06:03.026 --rc geninfo_unexecuted_blocks=1 00:06:03.026 00:06:03.026 ' 00:06:03.026 05:30:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:03.026 05:30:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.026 05:30:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.026 05:30:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:03.026 05:30:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.026 05:30:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.026 ************************************ 00:06:03.026 START TEST event_perf 00:06:03.026 ************************************ 00:06:03.026 05:30:50 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.026 Running I/O for 1 seconds...[2024-12-10 05:30:50.871025] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:06:03.026 [2024-12-10 05:30:50.871094] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005759 ] 00:06:03.285 [2024-12-10 05:30:50.950137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.285 [2024-12-10 05:30:50.992378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.285 [2024-12-10 05:30:50.992410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.285 [2024-12-10 05:30:50.992515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.285 [2024-12-10 05:30:50.992516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.221 Running I/O for 1 seconds... 00:06:04.221 lcore 0: 206297 00:06:04.221 lcore 1: 206298 00:06:04.221 lcore 2: 206298 00:06:04.221 lcore 3: 206297 00:06:04.221 done. 
00:06:04.221 00:06:04.221 real 0m1.182s 00:06:04.221 user 0m4.101s 00:06:04.221 sys 0m0.080s 00:06:04.221 05:30:52 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.221 05:30:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.221 ************************************ 00:06:04.221 END TEST event_perf 00:06:04.221 ************************************ 00:06:04.221 05:30:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.221 05:30:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:04.221 05:30:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.221 05:30:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.221 ************************************ 00:06:04.221 START TEST event_reactor 00:06:04.221 ************************************ 00:06:04.222 05:30:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.480 [2024-12-10 05:30:52.124051] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:06:04.480 [2024-12-10 05:30:52.124121] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005946 ] 00:06:04.480 [2024-12-10 05:30:52.204428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.480 [2024-12-10 05:30:52.242887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.417 test_start 00:06:05.417 oneshot 00:06:05.417 tick 100 00:06:05.417 tick 100 00:06:05.417 tick 250 00:06:05.417 tick 100 00:06:05.417 tick 100 00:06:05.417 tick 100 00:06:05.417 tick 250 00:06:05.417 tick 500 00:06:05.417 tick 100 00:06:05.417 tick 100 00:06:05.417 tick 250 00:06:05.417 tick 100 00:06:05.417 tick 100 00:06:05.417 test_end 00:06:05.417 00:06:05.417 real 0m1.176s 00:06:05.417 user 0m1.090s 00:06:05.417 sys 0m0.083s 00:06:05.417 05:30:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.417 05:30:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:05.417 ************************************ 00:06:05.417 END TEST event_reactor 00:06:05.417 ************************************ 00:06:05.675 05:30:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.675 05:30:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:05.675 05:30:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.675 05:30:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.675 ************************************ 00:06:05.675 START TEST event_reactor_perf 00:06:05.675 ************************************ 00:06:05.675 05:30:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:05.675 [2024-12-10 05:30:53.372797] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:05.675 [2024-12-10 05:30:53.372860] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006134 ] 00:06:05.675 [2024-12-10 05:30:53.450336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.675 [2024-12-10 05:30:53.489796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.053 test_start 00:06:07.053 test_end 00:06:07.053 Performance: 517573 events per second 00:06:07.053 00:06:07.053 real 0m1.176s 00:06:07.053 user 0m1.092s 00:06:07.053 sys 0m0.080s 00:06:07.053 05:30:54 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.053 05:30:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.053 ************************************ 00:06:07.053 END TEST event_reactor_perf 00:06:07.053 ************************************ 00:06:07.053 05:30:54 event -- event/event.sh@49 -- # uname -s 00:06:07.053 05:30:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.053 05:30:54 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.053 05:30:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.053 05:30:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.053 05:30:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.053 ************************************ 00:06:07.053 START TEST event_scheduler 00:06:07.053 ************************************ 00:06:07.053 05:30:54 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.053 * Looking for test storage... 00:06:07.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.054 05:30:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.054 --rc genhtml_branch_coverage=1 00:06:07.054 --rc genhtml_function_coverage=1 00:06:07.054 --rc genhtml_legend=1 00:06:07.054 --rc geninfo_all_blocks=1 00:06:07.054 --rc geninfo_unexecuted_blocks=1 00:06:07.054 00:06:07.054 ' 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.054 --rc genhtml_branch_coverage=1 00:06:07.054 --rc genhtml_function_coverage=1 00:06:07.054 --rc 
genhtml_legend=1 00:06:07.054 --rc geninfo_all_blocks=1 00:06:07.054 --rc geninfo_unexecuted_blocks=1 00:06:07.054 00:06:07.054 ' 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.054 --rc genhtml_branch_coverage=1 00:06:07.054 --rc genhtml_function_coverage=1 00:06:07.054 --rc genhtml_legend=1 00:06:07.054 --rc geninfo_all_blocks=1 00:06:07.054 --rc geninfo_unexecuted_blocks=1 00:06:07.054 00:06:07.054 ' 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.054 --rc genhtml_branch_coverage=1 00:06:07.054 --rc genhtml_function_coverage=1 00:06:07.054 --rc genhtml_legend=1 00:06:07.054 --rc geninfo_all_blocks=1 00:06:07.054 --rc geninfo_unexecuted_blocks=1 00:06:07.054 00:06:07.054 ' 00:06:07.054 05:30:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.054 05:30:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.054 05:30:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1006454 00:06:07.054 05:30:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.054 05:30:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1006454 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1006454 ']' 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.054 05:30:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.054 [2024-12-10 05:30:54.820012] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:07.054 [2024-12-10 05:30:54.820055] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006454 ] 00:06:07.054 [2024-12-10 05:30:54.892758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.054 [2024-12-10 05:30:54.937036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.054 [2024-12-10 05:30:54.937144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.054 [2024-12-10 05:30:54.937257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.054 [2024-12-10 05:30:54.937257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.313 05:30:54 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.313 05:30:54 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:07.313 05:30:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:07.313 05:30:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.313 05:30:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.313 [2024-12-10 05:30:54.985781] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:07.313 [2024-12-10 05:30:54.985798] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:07.313 [2024-12-10 05:30:54.985807] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:07.313 [2024-12-10 05:30:54.985812] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:07.313 [2024-12-10 05:30:54.985817] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:07.313 05:30:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.313 05:30:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:07.313 05:30:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.313 05:30:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.313 [2024-12-10 05:30:55.060169] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:07.313 05:30:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.313 05:30:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:07.313 05:30:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.313 05:30:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.313 05:30:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.313 ************************************ 00:06:07.313 START TEST scheduler_create_thread 00:06:07.313 ************************************ 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.313 2 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.313 3 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.313 4 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.313 5 00:06:07.313 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.313 05:30:55 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.314 6 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.314 7 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.314 8 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.314 05:30:55 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.314 9 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.314 10 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.314 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.882 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.882 05:30:55 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:07.882 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.882 05:30:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.257 05:30:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.257 05:30:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:09.257 05:30:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:09.257 05:30:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.257 05:30:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.632 05:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.632 00:06:10.632 real 0m3.097s 00:06:10.632 user 0m0.021s 00:06:10.632 sys 0m0.008s 00:06:10.632 05:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.632 05:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.632 ************************************ 00:06:10.632 END TEST scheduler_create_thread 00:06:10.632 ************************************ 00:06:10.632 05:30:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:10.632 05:30:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1006454 00:06:10.632 05:30:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1006454 ']' 00:06:10.632 05:30:58 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1006454 00:06:10.632 05:30:58 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:10.632 05:30:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.632 05:30:58 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1006454 00:06:10.632 05:30:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:10.632 05:30:58 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:10.632 05:30:58 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1006454' 00:06:10.632 killing process with pid 1006454 00:06:10.632 05:30:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1006454 00:06:10.632 05:30:58 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1006454 00:06:10.890 [2024-12-10 05:30:58.575492] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:10.890 00:06:10.890 real 0m4.154s 00:06:10.890 user 0m6.676s 00:06:10.890 sys 0m0.351s 00:06:10.890 05:30:58 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.890 05:30:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.890 ************************************ 00:06:10.890 END TEST event_scheduler 00:06:10.890 ************************************ 00:06:11.150 05:30:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:11.150 05:30:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:11.150 05:30:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.150 05:30:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.150 05:30:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.150 ************************************ 00:06:11.150 START TEST app_repeat 00:06:11.150 ************************************ 00:06:11.150 05:30:58 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1007193 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1007193' 00:06:11.150 Process app_repeat pid: 1007193 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.150 spdk_app_start Round 0 00:06:11.150 05:30:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1007193 /var/tmp/spdk-nbd.sock 00:06:11.150 05:30:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1007193 ']' 00:06:11.150 05:30:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.150 05:30:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.150 05:30:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.150 05:30:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.150 05:30:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.150 [2024-12-10 05:30:58.873869] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:06:11.150 [2024-12-10 05:30:58.873920] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007193 ] 00:06:11.150 [2024-12-10 05:30:58.947878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.151 [2024-12-10 05:30:58.990359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.151 [2024-12-10 05:30:58.990363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.409 05:30:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.409 05:30:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:11.409 05:30:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.409 Malloc0 00:06:11.409 05:30:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.668 Malloc1 00:06:11.668 05:30:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.668 
05:30:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.668 05:30:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.927 /dev/nbd0 00:06:11.927 05:30:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.927 05:30:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:11.927 1+0 records in 00:06:11.927 1+0 records out 00:06:11.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225503 s, 18.2 MB/s 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.927 05:30:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:11.927 05:30:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.927 05:30:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.927 05:30:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.187 /dev/nbd1 00:06:12.187 05:30:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.187 05:30:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.187 05:30:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:12.187 05:30:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.187 05:30:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.187 05:30:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.187 05:30:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:12.187 05:30:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.187 05:30:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.187 05:30:59 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.187 05:30:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.187 1+0 records in 00:06:12.187 1+0 records out 00:06:12.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226732 s, 18.1 MB/s 00:06:12.187 05:30:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.187 05:31:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.187 05:31:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.187 05:31:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.187 05:31:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.187 05:31:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.187 05:31:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.187 05:31:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.187 05:31:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.187 05:31:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.446 { 00:06:12.446 "nbd_device": "/dev/nbd0", 00:06:12.446 "bdev_name": "Malloc0" 00:06:12.446 }, 00:06:12.446 { 00:06:12.446 "nbd_device": "/dev/nbd1", 00:06:12.446 "bdev_name": "Malloc1" 00:06:12.446 } 00:06:12.446 ]' 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.446 { 00:06:12.446 "nbd_device": "/dev/nbd0", 00:06:12.446 "bdev_name": "Malloc0" 00:06:12.446 
}, 00:06:12.446 { 00:06:12.446 "nbd_device": "/dev/nbd1", 00:06:12.446 "bdev_name": "Malloc1" 00:06:12.446 } 00:06:12.446 ]' 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.446 /dev/nbd1' 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.446 /dev/nbd1' 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.446 256+0 records in 00:06:12.446 256+0 records out 00:06:12.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010666 s, 98.3 MB/s 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.446 256+0 records in 00:06:12.446 256+0 records out 00:06:12.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136747 s, 76.7 MB/s 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.446 256+0 records in 00:06:12.446 256+0 records out 00:06:12.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147729 s, 71.0 MB/s 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.446 05:31:00 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.446 05:31:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.705 05:31:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.705 05:31:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.705 05:31:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.705 05:31:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.705 05:31:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.706 05:31:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.706 05:31:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.706 05:31:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.706 05:31:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.706 05:31:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.964 05:31:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.964 05:31:00 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.964 05:31:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.964 05:31:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.964 05:31:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.964 05:31:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.964 05:31:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.964 05:31:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.964 05:31:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.964 05:31:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.964 05:31:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.223 05:31:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.223 05:31:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.223 05:31:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.223 05:31:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.223 05:31:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.223 05:31:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.223 05:31:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.223 05:31:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.223 05:31:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.223 05:31:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.223 05:31:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.223 05:31:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.223 05:31:01 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.482 05:31:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.740 [2024-12-10 05:31:01.373794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.740 [2024-12-10 05:31:01.410384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.740 [2024-12-10 05:31:01.410385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.740 [2024-12-10 05:31:01.450670] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.740 [2024-12-10 05:31:01.450709] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.026 05:31:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.026 05:31:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:17.026 spdk_app_start Round 1 00:06:17.026 05:31:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1007193 /var/tmp/spdk-nbd.sock 00:06:17.026 05:31:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1007193 ']' 00:06:17.026 05:31:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.026 05:31:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.026 05:31:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:17.026 05:31:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.026 05:31:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.026 05:31:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.026 05:31:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.026 05:31:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.026 Malloc0 00:06:17.026 05:31:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.026 Malloc1 00:06:17.026 05:31:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.026 05:31:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.285 /dev/nbd0 00:06:17.285 05:31:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.285 05:31:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.285 1+0 records in 00:06:17.285 1+0 records out 00:06:17.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230922 s, 17.7 MB/s 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:17.285 05:31:05 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.285 05:31:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:17.285 05:31:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.285 05:31:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.285 05:31:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.543 /dev/nbd1 00:06:17.543 05:31:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.543 05:31:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.543 1+0 records in 00:06:17.543 1+0 records out 00:06:17.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224386 s, 18.3 MB/s 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.543 05:31:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:17.543 05:31:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.543 05:31:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.543 05:31:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.543 05:31:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.543 05:31:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.802 { 00:06:17.802 "nbd_device": "/dev/nbd0", 00:06:17.802 "bdev_name": "Malloc0" 00:06:17.802 }, 00:06:17.802 { 00:06:17.802 "nbd_device": "/dev/nbd1", 00:06:17.802 "bdev_name": "Malloc1" 00:06:17.802 } 00:06:17.802 ]' 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.802 { 00:06:17.802 "nbd_device": "/dev/nbd0", 00:06:17.802 "bdev_name": "Malloc0" 00:06:17.802 }, 00:06:17.802 { 00:06:17.802 "nbd_device": "/dev/nbd1", 00:06:17.802 "bdev_name": "Malloc1" 00:06:17.802 } 00:06:17.802 ]' 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.802 /dev/nbd1' 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.802 /dev/nbd1' 00:06:17.802 
05:31:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.802 256+0 records in 00:06:17.802 256+0 records out 00:06:17.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100169 s, 105 MB/s 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.802 256+0 records in 00:06:17.802 256+0 records out 00:06:17.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135955 s, 77.1 MB/s 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.802 256+0 records in 00:06:17.802 256+0 records out 00:06:17.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146317 s, 71.7 MB/s 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.802 05:31:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.061 05:31:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.061 05:31:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.061 05:31:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.061 05:31:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.061 05:31:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.061 05:31:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.061 05:31:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.061 05:31:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.061 05:31:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.061 05:31:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.320 05:31:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.320 05:31:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.320 05:31:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.320 05:31:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.320 05:31:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.320 05:31:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.320 05:31:06 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:18.320 05:31:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.320 05:31:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.320 05:31:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.320 05:31:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.579 05:31:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.579 05:31:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.838 05:31:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.096 [2024-12-10 05:31:06.733579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.096 [2024-12-10 05:31:06.769861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.096 [2024-12-10 05:31:06.769862] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.096 [2024-12-10 05:31:06.810339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.096 [2024-12-10 05:31:06.810378] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.381 05:31:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.382 05:31:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:22.382 spdk_app_start Round 2 00:06:22.382 05:31:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1007193 /var/tmp/spdk-nbd.sock 00:06:22.382 05:31:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1007193 ']' 00:06:22.382 05:31:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.382 05:31:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.382 05:31:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:22.382 05:31:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.382 05:31:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.382 05:31:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.382 05:31:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:22.382 05:31:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.382 Malloc0 00:06:22.382 05:31:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.382 Malloc1 00:06:22.382 05:31:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.382 05:31:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.640 /dev/nbd0 00:06:22.641 05:31:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.641 05:31:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.641 1+0 records in 00:06:22.641 1+0 records out 00:06:22.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193558 s, 21.2 MB/s 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:22.641 05:31:10 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.641 05:31:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:22.641 05:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.641 05:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.641 05:31:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.931 /dev/nbd1 00:06:22.931 05:31:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.931 05:31:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.931 1+0 records in 00:06:22.931 1+0 records out 00:06:22.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221692 s, 18.5 MB/s 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.931 05:31:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:22.931 05:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.931 05:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.931 05:31:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.931 05:31:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.931 05:31:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.236 05:31:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.236 { 00:06:23.236 "nbd_device": "/dev/nbd0", 00:06:23.236 "bdev_name": "Malloc0" 00:06:23.236 }, 00:06:23.236 { 00:06:23.236 "nbd_device": "/dev/nbd1", 00:06:23.236 "bdev_name": "Malloc1" 00:06:23.236 } 00:06:23.236 ]' 00:06:23.236 05:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.236 { 00:06:23.236 "nbd_device": "/dev/nbd0", 00:06:23.236 "bdev_name": "Malloc0" 00:06:23.236 }, 00:06:23.236 { 00:06:23.236 "nbd_device": "/dev/nbd1", 00:06:23.236 "bdev_name": "Malloc1" 00:06:23.237 } 00:06:23.237 ]' 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.237 /dev/nbd1' 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.237 /dev/nbd1' 00:06:23.237 
05:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.237 256+0 records in 00:06:23.237 256+0 records out 00:06:23.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101748 s, 103 MB/s 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.237 256+0 records in 00:06:23.237 256+0 records out 00:06:23.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137849 s, 76.1 MB/s 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.237 256+0 records in 00:06:23.237 256+0 records out 00:06:23.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148662 s, 70.5 MB/s 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.237 05:31:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.237 05:31:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.237 05:31:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.237 05:31:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.237 05:31:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.237 05:31:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.237 05:31:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:23.237 05:31:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.237 05:31:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.237 05:31:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.237 05:31:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.555 05:31:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.555 05:31:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.555 05:31:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.555 05:31:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.555 05:31:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.555 05:31:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.555 05:31:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.555 05:31:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.555 05:31:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.555 05:31:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.814 05:31:11 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.814 05:31:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.814 05:31:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.073 05:31:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.332 [2024-12-10 05:31:12.052335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.332 [2024-12-10 05:31:12.087762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.332 [2024-12-10 05:31:12.087763] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.332 [2024-12-10 05:31:12.128265] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.332 [2024-12-10 05:31:12.128302] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.619 05:31:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1007193 /var/tmp/spdk-nbd.sock 00:06:27.619 05:31:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1007193 ']' 00:06:27.619 05:31:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.619 05:31:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.619 05:31:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
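The `waitfornbd_exit` calls traced above (bdev/nbd_common.sh@37-45) poll `/proc/partitions` up to 20 times, `break`ing as soon as the nbd device name disappears. A minimal standalone sketch of that bounded-retry pattern follows; it substitutes a temp file for the real nbd device so it runs anywhere, and the helper name `wait_for_gone` is illustrative, not from the SPDK scripts.

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd_exit polling pattern: retry a presence check
# up to 20 times, returning 0 once the resource is gone, 1 on timeout.
# A plain file stands in for the /proc/partitions nbd entry.
wait_for_gone() {
    local path=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        [[ -e $path ]] || return 0   # resource gone: mirrors 'break ... return 0'
        sleep 0.1
    done
    return 1                          # still present after 20 polls
}

tmp=$(mktemp)
(sleep 0.3; rm -f "$tmp") &          # stand-in for the async nbd_stop_disk RPC
wait_for_gone "$tmp" && echo "device released"
wait
```

The cap of 20 iterations matches the `(( i <= 20 ))` guard in the trace; it keeps a stuck teardown from hanging the whole test run.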
00:06:27.619 05:31:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.619 05:31:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:27.619 05:31:15 event.app_repeat -- event/event.sh@39 -- # killprocess 1007193 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1007193 ']' 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1007193 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1007193 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1007193' 00:06:27.619 killing process with pid 1007193 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1007193 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1007193 00:06:27.619 spdk_app_start is called in Round 0. 00:06:27.619 Shutdown signal received, stop current app iteration 00:06:27.619 Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 reinitialization... 00:06:27.619 spdk_app_start is called in Round 1. 00:06:27.619 Shutdown signal received, stop current app iteration 00:06:27.619 Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 reinitialization... 00:06:27.619 spdk_app_start is called in Round 2. 
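The `killprocess` steps in the trace use `kill -0 <pid>` (autotest_common.sh@958) to test whether the target process still exists before and after sending the real signal. A small self-contained sketch of that liveness probe, using a throwaway `sleep` process rather than an SPDK target:

```shell
#!/usr/bin/env bash
# Sketch of the 'kill -0' liveness check seen in killprocess:
# signal 0 delivers nothing, it only reports whether the PID exists
# (and is signalable), via the exit status.
sleep 5 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then
    echo "pid $pid is alive"
fi
kill "$pid"                          # actual termination, as in 'kill 1007193'
wait "$pid" 2>/dev/null || true      # reap; wait reports the signal status
kill -0 "$pid" 2>/dev/null || echo "pid $pid is gone"
```

This is why the log shows `kill -0 1007193` succeeding before the `kill`/`wait` pair: the probe confirms the reactor process is up without disturbing it.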
00:06:27.619 Shutdown signal received, stop current app iteration 00:06:27.619 Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 reinitialization... 00:06:27.619 spdk_app_start is called in Round 3. 00:06:27.619 Shutdown signal received, stop current app iteration 00:06:27.619 05:31:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:27.619 05:31:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:27.619 00:06:27.619 real 0m16.474s 00:06:27.619 user 0m36.321s 00:06:27.619 sys 0m2.503s 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.619 05:31:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.619 ************************************ 00:06:27.619 END TEST app_repeat 00:06:27.619 ************************************ 00:06:27.619 05:31:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:27.619 05:31:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:27.619 05:31:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.619 05:31:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.619 05:31:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.619 ************************************ 00:06:27.619 START TEST cpu_locks 00:06:27.619 ************************************ 00:06:27.619 05:31:15 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:27.619 * Looking for test storage... 
00:06:27.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:27.619 05:31:15 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:27.619 05:31:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:27.619 05:31:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.879 05:31:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.879 05:31:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:27.879 05:31:15 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.879 05:31:15 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.879 --rc genhtml_branch_coverage=1 00:06:27.879 --rc genhtml_function_coverage=1 00:06:27.879 --rc genhtml_legend=1 00:06:27.879 --rc geninfo_all_blocks=1 00:06:27.879 --rc geninfo_unexecuted_blocks=1 00:06:27.879 00:06:27.879 ' 00:06:27.879 05:31:15 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.879 --rc genhtml_branch_coverage=1 00:06:27.879 --rc genhtml_function_coverage=1 00:06:27.879 --rc genhtml_legend=1 00:06:27.879 --rc geninfo_all_blocks=1 00:06:27.879 --rc geninfo_unexecuted_blocks=1 
00:06:27.879 00:06:27.879 ' 00:06:27.879 05:31:15 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.879 --rc genhtml_branch_coverage=1 00:06:27.879 --rc genhtml_function_coverage=1 00:06:27.879 --rc genhtml_legend=1 00:06:27.879 --rc geninfo_all_blocks=1 00:06:27.879 --rc geninfo_unexecuted_blocks=1 00:06:27.879 00:06:27.879 ' 00:06:27.879 05:31:15 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.879 --rc genhtml_branch_coverage=1 00:06:27.879 --rc genhtml_function_coverage=1 00:06:27.879 --rc genhtml_legend=1 00:06:27.879 --rc geninfo_all_blocks=1 00:06:27.879 --rc geninfo_unexecuted_blocks=1 00:06:27.879 00:06:27.879 ' 00:06:27.879 05:31:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:27.879 05:31:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:27.879 05:31:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:27.879 05:31:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:27.879 05:31:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.879 05:31:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.879 05:31:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.879 ************************************ 00:06:27.879 START TEST default_locks 00:06:27.879 ************************************ 00:06:27.879 05:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:27.879 05:31:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1010267 00:06:27.879 05:31:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1010267 00:06:27.879 05:31:15 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.879 05:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1010267 ']' 00:06:27.880 05:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.880 05:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.880 05:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.880 05:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.880 05:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.880 [2024-12-10 05:31:15.647063] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:06:27.880 [2024-12-10 05:31:15.647104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010267 ] 00:06:27.880 [2024-12-10 05:31:15.719780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.880 [2024-12-10 05:31:15.759982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.139 05:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.139 05:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:28.139 05:31:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1010267 00:06:28.139 05:31:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1010267 00:06:28.139 05:31:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.706 lslocks: write error 00:06:28.706 05:31:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1010267 00:06:28.706 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1010267 ']' 00:06:28.706 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1010267 00:06:28.706 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:28.706 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.707 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010267 00:06:28.707 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.707 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.707 05:31:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1010267' 00:06:28.707 killing process with pid 1010267 00:06:28.707 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1010267 00:06:28.707 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1010267 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1010267 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1010267 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1010267 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1010267 ']' 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.966 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1010267) - No such process 00:06:28.967 ERROR: process (pid: 1010267) is no longer running 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.967 00:06:28.967 real 0m1.100s 00:06:28.967 user 0m1.065s 00:06:28.967 sys 0m0.497s 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.967 05:31:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.967 ************************************ 00:06:28.967 END TEST default_locks 00:06:28.967 ************************************ 00:06:28.967 05:31:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:28.967 05:31:16 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.967 05:31:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.967 05:31:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.967 ************************************ 00:06:28.967 START TEST default_locks_via_rpc 00:06:28.967 ************************************ 00:06:28.967 05:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:28.967 05:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1010520 00:06:28.967 05:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.967 05:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1010520 00:06:28.967 05:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1010520 ']' 00:06:28.967 05:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.967 05:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.967 05:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.967 05:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.967 05:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.967 [2024-12-10 05:31:16.805108] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:06:28.967 [2024-12-10 05:31:16.805144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010520 ] 00:06:29.226 [2024-12-10 05:31:16.877821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.226 [2024-12-10 05:31:16.918224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.485 05:31:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1010520 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1010520 00:06:29.485 05:31:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.744 05:31:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1010520 00:06:29.744 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1010520 ']' 00:06:29.744 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1010520 00:06:29.744 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:29.744 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.744 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010520 00:06:29.744 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.744 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.744 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1010520' 00:06:29.744 killing process with pid 1010520 00:06:29.744 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1010520 00:06:29.744 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1010520 00:06:30.003 00:06:30.003 real 0m1.021s 00:06:30.003 user 0m0.976s 00:06:30.003 sys 0m0.474s 00:06:30.003 05:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.003 05:31:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.003 ************************************ 00:06:30.003 END TEST default_locks_via_rpc 00:06:30.003 ************************************ 00:06:30.003 05:31:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:30.003 05:31:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.003 05:31:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.003 05:31:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.003 ************************************ 00:06:30.003 START TEST non_locking_app_on_locked_coremask 00:06:30.003 ************************************ 00:06:30.003 05:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:30.003 05:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1010764 00:06:30.003 05:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1010764 /var/tmp/spdk.sock 00:06:30.003 05:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.003 05:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1010764 ']' 00:06:30.003 05:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.003 05:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.003 05:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:30.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.003 05:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.003 05:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.262 [2024-12-10 05:31:17.905473] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:30.262 [2024-12-10 05:31:17.905514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010764 ] 00:06:30.262 [2024-12-10 05:31:17.979141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.262 [2024-12-10 05:31:18.019384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.521 05:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.521 05:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.521 05:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1010777 00:06:30.521 05:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1010777 /var/tmp/spdk2.sock 00:06:30.521 05:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:30.521 05:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1010777 ']' 00:06:30.521 05:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:30.521 05:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.521 05:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.521 05:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.521 05:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.521 [2024-12-10 05:31:18.286998] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:30.521 [2024-12-10 05:31:18.287045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010777 ] 00:06:30.521 [2024-12-10 05:31:18.372485] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.521 [2024-12-10 05:31:18.372512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.779 [2024-12-10 05:31:18.459351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.346 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.346 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.346 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1010764 00:06:31.346 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1010764 00:06:31.346 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.914 lslocks: write error 00:06:31.914 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1010764 00:06:31.914 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1010764 ']' 00:06:31.914 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1010764 00:06:31.914 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.914 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.914 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010764 00:06:31.914 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.914 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.914 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1010764' 00:06:31.914 killing process with pid 1010764 00:06:31.914 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1010764 00:06:31.914 05:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1010764 00:06:32.482 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1010777 00:06:32.482 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1010777 ']' 00:06:32.482 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1010777 00:06:32.482 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:32.482 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.482 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010777 00:06:32.482 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.482 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.482 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1010777' 00:06:32.482 killing process with pid 1010777 00:06:32.482 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1010777 00:06:32.482 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1010777 00:06:33.050 00:06:33.050 real 0m2.815s 00:06:33.050 user 0m2.966s 00:06:33.050 sys 0m0.943s 00:06:33.050 05:31:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.050 05:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.050 ************************************ 00:06:33.050 END TEST non_locking_app_on_locked_coremask 00:06:33.050 ************************************ 00:06:33.050 05:31:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:33.050 05:31:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.050 05:31:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.050 05:31:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.050 ************************************ 00:06:33.050 START TEST locking_app_on_unlocked_coremask 00:06:33.050 ************************************ 00:06:33.050 05:31:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:33.050 05:31:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1011260 00:06:33.050 05:31:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1011260 /var/tmp/spdk.sock 00:06:33.050 05:31:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:33.050 05:31:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1011260 ']' 00:06:33.050 05:31:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.050 05:31:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.050 05:31:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.050 05:31:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.050 05:31:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.050 [2024-12-10 05:31:20.792434] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:33.050 [2024-12-10 05:31:20.792477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011260 ] 00:06:33.050 [2024-12-10 05:31:20.866753] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.050 [2024-12-10 05:31:20.866779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.050 [2024-12-10 05:31:20.902547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.309 05:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.309 05:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.309 05:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1011266 00:06:33.309 05:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1011266 /var/tmp/spdk2.sock 00:06:33.309 05:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.309 05:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1011266 ']' 00:06:33.309 05:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.309 05:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.309 05:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.309 05:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.309 05:31:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.309 [2024-12-10 05:31:21.173714] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:06:33.309 [2024-12-10 05:31:21.173761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011266 ] 00:06:33.567 [2024-12-10 05:31:21.258772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.567 [2024-12-10 05:31:21.338040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.135 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.135 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.135 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1011266 00:06:34.135 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1011266 00:06:34.135 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.702 lslocks: write error 00:06:34.702 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1011260 00:06:34.702 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1011260 ']' 00:06:34.702 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1011260 00:06:34.702 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.702 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.702 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1011260 00:06:34.961 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.961 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.961 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1011260' 00:06:34.961 killing process with pid 1011260 00:06:34.961 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1011260 00:06:34.961 05:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1011260 00:06:35.529 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1011266 00:06:35.529 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1011266 ']' 00:06:35.529 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1011266 00:06:35.529 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.529 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.529 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1011266 00:06:35.529 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.529 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.529 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1011266' 00:06:35.529 killing process with pid 1011266 00:06:35.529 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1011266 00:06:35.529 05:31:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1011266 00:06:35.788 00:06:35.788 real 0m2.810s 00:06:35.788 user 0m2.964s 00:06:35.788 sys 0m0.942s 00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.788 ************************************ 00:06:35.788 END TEST locking_app_on_unlocked_coremask 00:06:35.788 ************************************ 00:06:35.788 05:31:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.788 05:31:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.788 05:31:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.788 05:31:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.788 ************************************ 00:06:35.788 START TEST locking_app_on_locked_coremask 00:06:35.788 ************************************ 00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1011748 00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1011748 /var/tmp/spdk.sock 00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1011748 ']' 00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.788 05:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.788 [2024-12-10 05:31:23.676009] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:35.788 [2024-12-10 05:31:23.676054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011748 ] 00:06:36.048 [2024-12-10 05:31:23.750086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.048 [2024-12-10 05:31:23.786382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1011752 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1011752 /var/tmp/spdk2.sock 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1011752 /var/tmp/spdk2.sock 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1011752 /var/tmp/spdk2.sock 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1011752 ']' 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.307 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.307 [2024-12-10 05:31:24.059099] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:36.307 [2024-12-10 05:31:24.059141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011752 ] 00:06:36.307 [2024-12-10 05:31:24.147337] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1011748 has claimed it. 00:06:36.307 [2024-12-10 05:31:24.147377] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1011752) - No such process 00:06:36.874 ERROR: process (pid: 1011752) is no longer running 00:06:36.874 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.874 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:36.874 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:36.874 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.874 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.874 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.874 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1011748 00:06:36.874 05:31:24 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1011748 00:06:36.874 05:31:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.441 lslocks: write error 00:06:37.441 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1011748 00:06:37.441 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1011748 ']' 00:06:37.441 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1011748 00:06:37.441 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.441 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.441 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1011748 00:06:37.441 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.442 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.442 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1011748' 00:06:37.442 killing process with pid 1011748 00:06:37.442 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1011748 00:06:37.442 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1011748 00:06:37.700 00:06:37.700 real 0m1.946s 00:06:37.700 user 0m2.078s 00:06:37.700 sys 0m0.645s 00:06:37.701 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.701 05:31:25 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.701 ************************************ 00:06:37.701 END TEST locking_app_on_locked_coremask 00:06:37.701 ************************************ 00:06:37.960 05:31:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.960 05:31:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.960 05:31:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.960 05:31:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.960 ************************************ 00:06:37.960 START TEST locking_overlapped_coremask 00:06:37.960 ************************************ 00:06:37.960 05:31:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:37.960 05:31:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1012023 00:06:37.960 05:31:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1012023 /var/tmp/spdk.sock 00:06:37.960 05:31:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.960 05:31:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1012023 ']' 00:06:37.960 05:31:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.960 05:31:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.960 05:31:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.960 05:31:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.960 05:31:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.960 [2024-12-10 05:31:25.692256] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:37.960 [2024-12-10 05:31:25.692301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012023 ] 00:06:37.960 [2024-12-10 05:31:25.766018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.960 [2024-12-10 05:31:25.808946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.960 [2024-12-10 05:31:25.809056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.960 [2024-12-10 05:31:25.809057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1012225 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1012225 /var/tmp/spdk2.sock 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1012225 /var/tmp/spdk2.sock 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1012225 /var/tmp/spdk2.sock 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1012225 ']' 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.219 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.219 [2024-12-10 05:31:26.076719] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:06:38.219 [2024-12-10 05:31:26.076763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012225 ] 00:06:38.478 [2024-12-10 05:31:26.168181] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1012023 has claimed it. 00:06:38.478 [2024-12-10 05:31:26.168221] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1012225) - No such process 00:06:39.046 ERROR: process (pid: 1012225) is no longer running 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1012023 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1012023 ']' 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1012023 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1012023 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1012023' 00:06:39.046 killing process with pid 1012023 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1012023 00:06:39.046 05:31:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1012023 00:06:39.305 00:06:39.305 real 0m1.430s 00:06:39.305 user 0m3.928s 00:06:39.305 sys 0m0.390s 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.305 
************************************ 00:06:39.305 END TEST locking_overlapped_coremask 00:06:39.305 ************************************ 00:06:39.305 05:31:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.305 05:31:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.305 05:31:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.305 05:31:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.305 ************************************ 00:06:39.305 START TEST locking_overlapped_coremask_via_rpc 00:06:39.305 ************************************ 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1012377 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1012377 /var/tmp/spdk.sock 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1012377 ']' 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:39.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.305 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.305 [2024-12-10 05:31:27.193411] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:39.306 [2024-12-10 05:31:27.193453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012377 ] 00:06:39.565 [2024-12-10 05:31:27.270009] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:39.565 [2024-12-10 05:31:27.270033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.565 [2024-12-10 05:31:27.312821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.565 [2024-12-10 05:31:27.312926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.565 [2024-12-10 05:31:27.312927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.823 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.823 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.824 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1012489 00:06:39.824 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1012489 /var/tmp/spdk2.sock 00:06:39.824 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:39.824 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1012489 ']' 00:06:39.824 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.824 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.824 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.824 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.824 05:31:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.824 [2024-12-10 05:31:27.568459] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:39.824 [2024-12-10 05:31:27.568507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012489 ] 00:06:39.824 [2024-12-10 05:31:27.659297] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:39.824 [2024-12-10 05:31:27.659325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.082 [2024-12-10 05:31:27.748692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.083 [2024-12-10 05:31:27.752210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.083 [2024-12-10 05:31:27.752212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.652 05:31:28 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.652 [2024-12-10 05:31:28.417242] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1012377 has claimed it. 00:06:40.652 request: 00:06:40.652 { 00:06:40.652 "method": "framework_enable_cpumask_locks", 00:06:40.652 "req_id": 1 00:06:40.652 } 00:06:40.652 Got JSON-RPC error response 00:06:40.652 response: 00:06:40.652 { 00:06:40.652 "code": -32603, 00:06:40.652 "message": "Failed to claim CPU core: 2" 00:06:40.652 } 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1012377 /var/tmp/spdk.sock 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1012377 ']' 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.652 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.911 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.911 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.911 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1012489 /var/tmp/spdk2.sock 00:06:40.911 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1012489 ']' 00:06:40.911 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.911 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.911 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:40.911 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.911 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.169 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.169 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.169 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.169 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.169 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.169 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.169 00:06:41.169 real 0m1.719s 00:06:41.169 user 0m0.835s 00:06:41.169 sys 0m0.143s 00:06:41.169 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.169 05:31:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.169 ************************************ 00:06:41.169 END TEST locking_overlapped_coremask_via_rpc 00:06:41.169 ************************************ 00:06:41.169 05:31:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.169 05:31:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1012377 ]] 00:06:41.169 05:31:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1012377 00:06:41.169 05:31:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1012377 ']' 00:06:41.169 05:31:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1012377 00:06:41.169 05:31:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:41.170 05:31:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.170 05:31:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1012377 00:06:41.170 05:31:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.170 05:31:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.170 05:31:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1012377' 00:06:41.170 killing process with pid 1012377 00:06:41.170 05:31:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1012377 00:06:41.170 05:31:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1012377 00:06:41.428 05:31:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1012489 ]] 00:06:41.428 05:31:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1012489 00:06:41.428 05:31:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1012489 ']' 00:06:41.428 05:31:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1012489 00:06:41.428 05:31:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:41.428 05:31:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.428 05:31:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1012489 00:06:41.428 05:31:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:41.428 05:31:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:41.429 05:31:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1012489' 00:06:41.429 killing process with pid 1012489 00:06:41.429 05:31:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1012489 00:06:41.429 05:31:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1012489 00:06:41.997 05:31:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.997 05:31:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:41.997 05:31:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1012377 ]] 00:06:41.997 05:31:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1012377 00:06:41.997 05:31:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1012377 ']' 00:06:41.997 05:31:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1012377 00:06:41.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1012377) - No such process 00:06:41.997 05:31:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1012377 is not found' 00:06:41.997 Process with pid 1012377 is not found 00:06:41.997 05:31:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1012489 ]] 00:06:41.997 05:31:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1012489 00:06:41.997 05:31:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1012489 ']' 00:06:41.997 05:31:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1012489 00:06:41.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1012489) - No such process 00:06:41.997 05:31:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1012489 is not found' 00:06:41.997 Process with pid 1012489 is not found 00:06:41.997 05:31:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.997 00:06:41.997 real 0m14.231s 00:06:41.997 user 0m24.587s 00:06:41.997 sys 0m5.007s 00:06:41.997 05:31:29 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.997 
05:31:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.997 ************************************ 00:06:41.997 END TEST cpu_locks 00:06:41.997 ************************************ 00:06:41.997 00:06:41.997 real 0m39.012s 00:06:41.997 user 1m14.142s 00:06:41.997 sys 0m8.489s 00:06:41.997 05:31:29 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.997 05:31:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.997 ************************************ 00:06:41.997 END TEST event 00:06:41.997 ************************************ 00:06:41.997 05:31:29 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:41.997 05:31:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.997 05:31:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.997 05:31:29 -- common/autotest_common.sh@10 -- # set +x 00:06:41.997 ************************************ 00:06:41.997 START TEST thread 00:06:41.997 ************************************ 00:06:41.997 05:31:29 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:41.997 * Looking for test storage... 
00:06:41.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:41.997 05:31:29 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.997 05:31:29 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.997 05:31:29 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.997 05:31:29 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.997 05:31:29 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.997 05:31:29 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.997 05:31:29 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.997 05:31:29 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.997 05:31:29 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.997 05:31:29 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.997 05:31:29 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.997 05:31:29 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.997 05:31:29 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.997 05:31:29 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.997 05:31:29 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.997 05:31:29 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:41.997 05:31:29 thread -- scripts/common.sh@345 -- # : 1 00:06:41.997 05:31:29 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.997 05:31:29 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.997 05:31:29 thread -- scripts/common.sh@365 -- # decimal 1 00:06:41.997 05:31:29 thread -- scripts/common.sh@353 -- # local d=1 00:06:41.997 05:31:29 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.997 05:31:29 thread -- scripts/common.sh@355 -- # echo 1 00:06:41.997 05:31:29 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.997 05:31:29 thread -- scripts/common.sh@366 -- # decimal 2 00:06:41.997 05:31:29 thread -- scripts/common.sh@353 -- # local d=2 00:06:42.256 05:31:29 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.256 05:31:29 thread -- scripts/common.sh@355 -- # echo 2 00:06:42.256 05:31:29 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.256 05:31:29 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.256 05:31:29 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.256 05:31:29 thread -- scripts/common.sh@368 -- # return 0 00:06:42.256 05:31:29 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.256 05:31:29 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:42.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.256 --rc genhtml_branch_coverage=1 00:06:42.256 --rc genhtml_function_coverage=1 00:06:42.256 --rc genhtml_legend=1 00:06:42.256 --rc geninfo_all_blocks=1 00:06:42.256 --rc geninfo_unexecuted_blocks=1 00:06:42.256 00:06:42.256 ' 00:06:42.256 05:31:29 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:42.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.256 --rc genhtml_branch_coverage=1 00:06:42.256 --rc genhtml_function_coverage=1 00:06:42.256 --rc genhtml_legend=1 00:06:42.256 --rc geninfo_all_blocks=1 00:06:42.256 --rc geninfo_unexecuted_blocks=1 00:06:42.256 00:06:42.256 ' 00:06:42.256 05:31:29 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:42.256 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.256 --rc genhtml_branch_coverage=1 00:06:42.256 --rc genhtml_function_coverage=1 00:06:42.256 --rc genhtml_legend=1 00:06:42.256 --rc geninfo_all_blocks=1 00:06:42.256 --rc geninfo_unexecuted_blocks=1 00:06:42.256 00:06:42.256 ' 00:06:42.256 05:31:29 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:42.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.256 --rc genhtml_branch_coverage=1 00:06:42.256 --rc genhtml_function_coverage=1 00:06:42.256 --rc genhtml_legend=1 00:06:42.256 --rc geninfo_all_blocks=1 00:06:42.256 --rc geninfo_unexecuted_blocks=1 00:06:42.256 00:06:42.256 ' 00:06:42.256 05:31:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.256 05:31:29 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:42.256 05:31:29 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.256 05:31:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.256 ************************************ 00:06:42.256 START TEST thread_poller_perf 00:06:42.256 ************************************ 00:06:42.256 05:31:29 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.256 [2024-12-10 05:31:29.947068] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:06:42.256 [2024-12-10 05:31:29.947136] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012981 ] 00:06:42.256 [2024-12-10 05:31:30.027241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.256 [2024-12-10 05:31:30.072441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.256 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:43.632 [2024-12-10T04:31:31.528Z] ====================================== 00:06:43.632 [2024-12-10T04:31:31.528Z] busy:2105181076 (cyc) 00:06:43.632 [2024-12-10T04:31:31.528Z] total_run_count: 415000 00:06:43.632 [2024-12-10T04:31:31.528Z] tsc_hz: 2100000000 (cyc) 00:06:43.632 [2024-12-10T04:31:31.528Z] ====================================== 00:06:43.632 [2024-12-10T04:31:31.528Z] poller_cost: 5072 (cyc), 2415 (nsec) 00:06:43.632 00:06:43.632 real 0m1.191s 00:06:43.632 user 0m1.107s 00:06:43.632 sys 0m0.080s 00:06:43.632 05:31:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.632 05:31:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.632 ************************************ 00:06:43.632 END TEST thread_poller_perf 00:06:43.632 ************************************ 00:06:43.632 05:31:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.632 05:31:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:43.632 05:31:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.632 05:31:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.632 ************************************ 00:06:43.632 START TEST thread_poller_perf 00:06:43.632 
************************************ 00:06:43.632 05:31:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.632 [2024-12-10 05:31:31.205422] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:43.632 [2024-12-10 05:31:31.205478] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013171 ] 00:06:43.632 [2024-12-10 05:31:31.280501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.632 [2024-12-10 05:31:31.318462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.632 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:44.569 [2024-12-10T04:31:32.465Z] ====================================== 00:06:44.569 [2024-12-10T04:31:32.465Z] busy:2101204368 (cyc) 00:06:44.569 [2024-12-10T04:31:32.465Z] total_run_count: 5114000 00:06:44.569 [2024-12-10T04:31:32.465Z] tsc_hz: 2100000000 (cyc) 00:06:44.569 [2024-12-10T04:31:32.465Z] ====================================== 00:06:44.569 [2024-12-10T04:31:32.465Z] poller_cost: 410 (cyc), 195 (nsec) 00:06:44.569 00:06:44.569 real 0m1.171s 00:06:44.569 user 0m1.096s 00:06:44.569 sys 0m0.072s 00:06:44.569 05:31:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.569 05:31:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.569 ************************************ 00:06:44.569 END TEST thread_poller_perf 00:06:44.569 ************************************ 00:06:44.569 05:31:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:44.569 00:06:44.569 real 0m2.676s 00:06:44.569 user 0m2.363s 00:06:44.569 sys 0m0.328s 00:06:44.569 05:31:32 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.569 05:31:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.569 ************************************ 00:06:44.569 END TEST thread 00:06:44.569 ************************************ 00:06:44.569 05:31:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:44.569 05:31:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:44.569 05:31:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.569 05:31:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.569 05:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:44.828 ************************************ 00:06:44.828 START TEST app_cmdline 00:06:44.828 ************************************ 00:06:44.828 05:31:32 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:44.828 * Looking for test storage... 00:06:44.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:44.828 05:31:32 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.828 05:31:32 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.828 05:31:32 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.828 05:31:32 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.828 05:31:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:44.828 05:31:32 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.828 05:31:32 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.828 --rc genhtml_branch_coverage=1 
00:06:44.828 --rc genhtml_function_coverage=1 00:06:44.828 --rc genhtml_legend=1 00:06:44.828 --rc geninfo_all_blocks=1 00:06:44.828 --rc geninfo_unexecuted_blocks=1 00:06:44.828 00:06:44.828 ' 00:06:44.828 05:31:32 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.828 --rc genhtml_branch_coverage=1 00:06:44.828 --rc genhtml_function_coverage=1 00:06:44.828 --rc genhtml_legend=1 00:06:44.828 --rc geninfo_all_blocks=1 00:06:44.828 --rc geninfo_unexecuted_blocks=1 00:06:44.828 00:06:44.828 ' 00:06:44.828 05:31:32 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.828 --rc genhtml_branch_coverage=1 00:06:44.828 --rc genhtml_function_coverage=1 00:06:44.828 --rc genhtml_legend=1 00:06:44.828 --rc geninfo_all_blocks=1 00:06:44.828 --rc geninfo_unexecuted_blocks=1 00:06:44.828 00:06:44.828 ' 00:06:44.828 05:31:32 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.828 --rc genhtml_branch_coverage=1 00:06:44.828 --rc genhtml_function_coverage=1 00:06:44.828 --rc genhtml_legend=1 00:06:44.828 --rc geninfo_all_blocks=1 00:06:44.828 --rc geninfo_unexecuted_blocks=1 00:06:44.828 00:06:44.828 ' 00:06:44.828 05:31:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:44.828 05:31:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1013521 00:06:44.828 05:31:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1013521 00:06:44.828 05:31:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:44.829 05:31:32 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1013521 ']' 00:06:44.829 05:31:32 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:44.829 05:31:32 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.829 05:31:32 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.829 05:31:32 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.829 05:31:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.829 [2024-12-10 05:31:32.700087] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:06:44.829 [2024-12-10 05:31:32.700135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013521 ] 00:06:45.087 [2024-12-10 05:31:32.774787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.088 [2024-12-10 05:31:32.815266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.347 05:31:33 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.347 05:31:33 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:45.347 05:31:33 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:45.347 { 00:06:45.347 "version": "SPDK v25.01-pre git sha1 0edc184ec", 00:06:45.347 "fields": { 00:06:45.347 "major": 25, 00:06:45.347 "minor": 1, 00:06:45.347 "patch": 0, 00:06:45.347 "suffix": "-pre", 00:06:45.347 "commit": "0edc184ec" 00:06:45.347 } 00:06:45.347 } 00:06:45.347 05:31:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:45.347 05:31:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:45.347 05:31:33 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:45.347 05:31:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:45.347 05:31:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:45.347 05:31:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:45.347 05:31:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:45.347 05:31:33 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.347 05:31:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.347 05:31:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.606 05:31:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:45.606 05:31:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:45.606 05:31:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.606 05:31:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:45.606 05:31:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.607 request: 00:06:45.607 { 00:06:45.607 "method": "env_dpdk_get_mem_stats", 00:06:45.607 "req_id": 1 00:06:45.607 } 00:06:45.607 Got JSON-RPC error response 00:06:45.607 response: 00:06:45.607 { 00:06:45.607 "code": -32601, 00:06:45.607 "message": "Method not found" 00:06:45.607 } 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.607 05:31:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1013521 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1013521 ']' 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1013521 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1013521 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1013521' 00:06:45.607 killing process with pid 1013521 00:06:45.607 
05:31:33 app_cmdline -- common/autotest_common.sh@973 -- # kill 1013521 00:06:45.607 05:31:33 app_cmdline -- common/autotest_common.sh@978 -- # wait 1013521 00:06:46.174 00:06:46.174 real 0m1.320s 00:06:46.174 user 0m1.521s 00:06:46.174 sys 0m0.449s 00:06:46.174 05:31:33 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.174 05:31:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.174 ************************************ 00:06:46.174 END TEST app_cmdline 00:06:46.174 ************************************ 00:06:46.174 05:31:33 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:46.174 05:31:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.174 05:31:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.174 05:31:33 -- common/autotest_common.sh@10 -- # set +x 00:06:46.174 ************************************ 00:06:46.174 START TEST version 00:06:46.174 ************************************ 00:06:46.174 05:31:33 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:46.174 * Looking for test storage... 
00:06:46.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:46.174 05:31:33 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.174 05:31:33 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.174 05:31:33 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.174 05:31:34 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.174 05:31:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.174 05:31:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.174 05:31:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.174 05:31:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.174 05:31:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.174 05:31:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.174 05:31:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.174 05:31:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.174 05:31:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.174 05:31:34 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.174 05:31:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.174 05:31:34 version -- scripts/common.sh@344 -- # case "$op" in 00:06:46.174 05:31:34 version -- scripts/common.sh@345 -- # : 1 00:06:46.174 05:31:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.174 05:31:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.174 05:31:34 version -- scripts/common.sh@365 -- # decimal 1 00:06:46.174 05:31:34 version -- scripts/common.sh@353 -- # local d=1 00:06:46.174 05:31:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.174 05:31:34 version -- scripts/common.sh@355 -- # echo 1 00:06:46.174 05:31:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.174 05:31:34 version -- scripts/common.sh@366 -- # decimal 2 00:06:46.174 05:31:34 version -- scripts/common.sh@353 -- # local d=2 00:06:46.174 05:31:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.174 05:31:34 version -- scripts/common.sh@355 -- # echo 2 00:06:46.175 05:31:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.175 05:31:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.175 05:31:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.175 05:31:34 version -- scripts/common.sh@368 -- # return 0 00:06:46.175 05:31:34 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.175 05:31:34 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.175 --rc genhtml_branch_coverage=1 00:06:46.175 --rc genhtml_function_coverage=1 00:06:46.175 --rc genhtml_legend=1 00:06:46.175 --rc geninfo_all_blocks=1 00:06:46.175 --rc geninfo_unexecuted_blocks=1 00:06:46.175 00:06:46.175 ' 00:06:46.175 05:31:34 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.175 --rc genhtml_branch_coverage=1 00:06:46.175 --rc genhtml_function_coverage=1 00:06:46.175 --rc genhtml_legend=1 00:06:46.175 --rc geninfo_all_blocks=1 00:06:46.175 --rc geninfo_unexecuted_blocks=1 00:06:46.175 00:06:46.175 ' 00:06:46.175 05:31:34 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.175 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.175 --rc genhtml_branch_coverage=1 00:06:46.175 --rc genhtml_function_coverage=1 00:06:46.175 --rc genhtml_legend=1 00:06:46.175 --rc geninfo_all_blocks=1 00:06:46.175 --rc geninfo_unexecuted_blocks=1 00:06:46.175 00:06:46.175 ' 00:06:46.175 05:31:34 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.175 --rc genhtml_branch_coverage=1 00:06:46.175 --rc genhtml_function_coverage=1 00:06:46.175 --rc genhtml_legend=1 00:06:46.175 --rc geninfo_all_blocks=1 00:06:46.175 --rc geninfo_unexecuted_blocks=1 00:06:46.175 00:06:46.175 ' 00:06:46.175 05:31:34 version -- app/version.sh@17 -- # get_header_version major 00:06:46.175 05:31:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.175 05:31:34 version -- app/version.sh@14 -- # cut -f2 00:06:46.175 05:31:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.175 05:31:34 version -- app/version.sh@17 -- # major=25 00:06:46.175 05:31:34 version -- app/version.sh@18 -- # get_header_version minor 00:06:46.175 05:31:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.175 05:31:34 version -- app/version.sh@14 -- # cut -f2 00:06:46.175 05:31:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.175 05:31:34 version -- app/version.sh@18 -- # minor=1 00:06:46.175 05:31:34 version -- app/version.sh@19 -- # get_header_version patch 00:06:46.175 05:31:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.175 05:31:34 version -- app/version.sh@14 -- # cut -f2 00:06:46.175 05:31:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.175 
05:31:34 version -- app/version.sh@19 -- # patch=0 00:06:46.175 05:31:34 version -- app/version.sh@20 -- # get_header_version suffix 00:06:46.175 05:31:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.175 05:31:34 version -- app/version.sh@14 -- # cut -f2 00:06:46.175 05:31:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.175 05:31:34 version -- app/version.sh@20 -- # suffix=-pre 00:06:46.175 05:31:34 version -- app/version.sh@22 -- # version=25.1 00:06:46.175 05:31:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:46.175 05:31:34 version -- app/version.sh@28 -- # version=25.1rc0 00:06:46.175 05:31:34 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:46.433 05:31:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:46.433 05:31:34 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:46.433 05:31:34 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:46.433 00:06:46.433 real 0m0.246s 00:06:46.433 user 0m0.161s 00:06:46.433 sys 0m0.128s 00:06:46.433 05:31:34 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.433 05:31:34 version -- common/autotest_common.sh@10 -- # set +x 00:06:46.433 ************************************ 00:06:46.433 END TEST version 00:06:46.433 ************************************ 00:06:46.433 05:31:34 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:46.433 05:31:34 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:46.433 05:31:34 -- spdk/autotest.sh@194 -- # uname -s 00:06:46.433 05:31:34 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:46.433 05:31:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.433 05:31:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.433 05:31:34 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:46.433 05:31:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:46.433 05:31:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:46.433 05:31:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.433 05:31:34 -- common/autotest_common.sh@10 -- # set +x 00:06:46.433 05:31:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:46.433 05:31:34 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:46.433 05:31:34 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:46.433 05:31:34 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:46.434 05:31:34 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:46.434 05:31:34 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:46.434 05:31:34 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:46.434 05:31:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.434 05:31:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.434 05:31:34 -- common/autotest_common.sh@10 -- # set +x 00:06:46.434 ************************************ 00:06:46.434 START TEST nvmf_tcp 00:06:46.434 ************************************ 00:06:46.434 05:31:34 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:46.434 * Looking for test storage... 
00:06:46.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:46.434 05:31:34 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.434 05:31:34 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.434 05:31:34 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.693 05:31:34 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.693 05:31:34 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:46.693 05:31:34 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.693 05:31:34 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.693 --rc genhtml_branch_coverage=1 00:06:46.693 --rc genhtml_function_coverage=1 00:06:46.693 --rc genhtml_legend=1 00:06:46.693 --rc geninfo_all_blocks=1 00:06:46.693 --rc geninfo_unexecuted_blocks=1 00:06:46.693 00:06:46.693 ' 00:06:46.693 05:31:34 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.693 --rc genhtml_branch_coverage=1 00:06:46.693 --rc genhtml_function_coverage=1 00:06:46.693 --rc genhtml_legend=1 00:06:46.693 --rc geninfo_all_blocks=1 00:06:46.693 --rc geninfo_unexecuted_blocks=1 00:06:46.693 00:06:46.693 ' 00:06:46.693 05:31:34 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:46.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.693 --rc genhtml_branch_coverage=1 00:06:46.693 --rc genhtml_function_coverage=1 00:06:46.693 --rc genhtml_legend=1 00:06:46.693 --rc geninfo_all_blocks=1 00:06:46.693 --rc geninfo_unexecuted_blocks=1 00:06:46.693 00:06:46.693 ' 00:06:46.693 05:31:34 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.693 --rc genhtml_branch_coverage=1 00:06:46.693 --rc genhtml_function_coverage=1 00:06:46.693 --rc genhtml_legend=1 00:06:46.693 --rc geninfo_all_blocks=1 00:06:46.693 --rc geninfo_unexecuted_blocks=1 00:06:46.693 00:06:46.693 ' 00:06:46.693 05:31:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:46.693 05:31:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:46.693 05:31:34 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:46.693 05:31:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.693 05:31:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.693 05:31:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.693 ************************************ 00:06:46.693 START TEST nvmf_target_core 00:06:46.693 ************************************ 00:06:46.693 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:46.693 * Looking for test storage... 
00:06:46.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:46.693 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.693 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.693 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.953 --rc genhtml_branch_coverage=1 00:06:46.953 --rc genhtml_function_coverage=1 00:06:46.953 --rc genhtml_legend=1 00:06:46.953 --rc geninfo_all_blocks=1 00:06:46.953 --rc geninfo_unexecuted_blocks=1 00:06:46.953 00:06:46.953 ' 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.953 --rc genhtml_branch_coverage=1 
00:06:46.953 --rc genhtml_function_coverage=1 00:06:46.953 --rc genhtml_legend=1 00:06:46.953 --rc geninfo_all_blocks=1 00:06:46.953 --rc geninfo_unexecuted_blocks=1 00:06:46.953 00:06:46.953 ' 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.953 --rc genhtml_branch_coverage=1 00:06:46.953 --rc genhtml_function_coverage=1 00:06:46.953 --rc genhtml_legend=1 00:06:46.953 --rc geninfo_all_blocks=1 00:06:46.953 --rc geninfo_unexecuted_blocks=1 00:06:46.953 00:06:46.953 ' 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.953 --rc genhtml_branch_coverage=1 00:06:46.953 --rc genhtml_function_coverage=1 00:06:46.953 --rc genhtml_legend=1 00:06:46.953 --rc geninfo_all_blocks=1 00:06:46.953 --rc geninfo_unexecuted_blocks=1 00:06:46.953 00:06:46.953 ' 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.953 05:31:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:46.954 ************************************ 00:06:46.954 START TEST nvmf_abort 00:06:46.954 ************************************ 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:46.954 * Looking for test storage... 
00:06:46.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.954 
05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.954 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:47.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.214 --rc genhtml_branch_coverage=1 00:06:47.214 --rc genhtml_function_coverage=1 00:06:47.214 --rc genhtml_legend=1 00:06:47.214 --rc geninfo_all_blocks=1 00:06:47.214 --rc 
geninfo_unexecuted_blocks=1 00:06:47.214 00:06:47.214 ' 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:47.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.214 --rc genhtml_branch_coverage=1 00:06:47.214 --rc genhtml_function_coverage=1 00:06:47.214 --rc genhtml_legend=1 00:06:47.214 --rc geninfo_all_blocks=1 00:06:47.214 --rc geninfo_unexecuted_blocks=1 00:06:47.214 00:06:47.214 ' 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:47.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.214 --rc genhtml_branch_coverage=1 00:06:47.214 --rc genhtml_function_coverage=1 00:06:47.214 --rc genhtml_legend=1 00:06:47.214 --rc geninfo_all_blocks=1 00:06:47.214 --rc geninfo_unexecuted_blocks=1 00:06:47.214 00:06:47.214 ' 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:47.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.214 --rc genhtml_branch_coverage=1 00:06:47.214 --rc genhtml_function_coverage=1 00:06:47.214 --rc genhtml_legend=1 00:06:47.214 --rc geninfo_all_blocks=1 00:06:47.214 --rc geninfo_unexecuted_blocks=1 00:06:47.214 00:06:47.214 ' 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
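The trace above shows scripts/common.sh comparing the installed lcov version against 2 (`lt 1.15 2`) to decide which coverage flags to pass. A minimal sketch of that dotted-version comparison; the helper name `lt` matches the trace, but using `sort -V` instead of the script's per-component loop is a simplification:

```shell
#!/usr/bin/env bash
# lt VER1 VER2 — succeed if VER1 sorts strictly before VER2 in version order.
# The trace implements this with a per-component read/compare loop over
# IFS=.-; sort -V is a compact equivalent for the common case.
lt() {
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if lt "1.15" "2"; then
  # Older lcov: enable the branch/function coverage rc flags, as the log does.
  echo "lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
else
  echo "lcov"
fi
```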
00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.214 05:31:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.214 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:47.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
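The "line 33: [: : integer expression expected" message above is a real shell diagnostic: an empty variable was tested with the numeric `-eq` operator. A small sketch of the usual guard (the variable name here is hypothetical, not the one in common.sh):

```shell
#!/usr/bin/env bash
# '[ "" -eq 1 ]' raises "integer expression expected", exactly as seen in the
# log. Expanding with a default value keeps the test numeric.
FLAG=""                                   # unset/empty, as in the failing case
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```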
gather_supported_nvmf_pci_devs 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:47.215 05:31:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:53.786 05:31:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:53.786 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:53.786 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:53.786 05:31:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:53.786 Found net devices under 0000:af:00.0: cvl_0_0 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.786 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:06:53.787 Found net devices under 0000:af:00.1: cvl_0_1 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:53.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:53.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:06:53.787 00:06:53.787 --- 10.0.0.2 ping statistics --- 00:06:53.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.787 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:06:53.787 00:06:53.787 --- 10.0.0.1 ping statistics --- 00:06:53.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.787 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort 
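The nvmf_tcp_init sequence above moves one port of the NIC pair into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) can exchange traffic over real hardware on a single host. A dry-run sketch of those steps; the interface names cvl_0_0/cvl_0_1 come from the log, and the `run` wrapper only echoes each command, since executing them requires root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps traced in the log. The wrapper
# only prints each command; swap 'echo "+ $*"' for '"$@"' to execute as root.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                        # target-side network namespace
run ip -4 addr flush cvl_0_0              # clear stale addresses on both ports
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"                    # create the namespace
run ip link set cvl_0_0 netns "$NS"       # move the target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                    # initiator -> target sanity check
run ip netns exec "$NS" ping -c 1 10.0.0.1
```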
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1017194 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1017194 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1017194 ']' 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.787 05:31:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 [2024-12-10 05:31:41.024361] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:06:53.787 [2024-12-10 05:31:41.024408] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.787 [2024-12-10 05:31:41.105325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.787 [2024-12-10 05:31:41.147186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.787 [2024-12-10 05:31:41.147219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.787 [2024-12-10 05:31:41.147226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.787 [2024-12-10 05:31:41.147232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.787 [2024-12-10 05:31:41.147237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:53.787 [2024-12-10 05:31:41.150185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.787 [2024-12-10 05:31:41.150279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.787 [2024-12-10 05:31:41.150281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 [2024-12-10 05:31:41.287181] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 Malloc0 00:06:53.787 05:31:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 Delay0 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 [2024-12-10 05:31:41.369879] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:53.787 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.788 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.788 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.788 05:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:53.788 [2024-12-10 05:31:41.506956] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:56.322 Initializing NVMe Controllers 00:06:56.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:56.323 controller IO queue size 128 less than required 00:06:56.323 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:56.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:56.323 Initialization complete. Launching workers. 
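The rpc_cmd calls above configure the target under test: a TCP transport, a malloc bdev wrapped in a delay bdev, and a subsystem exposing it on 10.0.0.2:4420. A dry-run sketch of the same RPCs; driving them through a bare `rpc.py` is an assumption (the log uses the autotest `rpc_cmd` helper), and the latency values are copied verbatim from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-configuration RPCs from the log. rpc() only
# echoes; point it at scripts/rpc.py with a running nvmf_tgt to execute.
rpc() { echo "+ rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc bdev_malloc_create 64 4096 -b Malloc0        # size 64, block size 4096
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000  # latencies as in the log
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The delay bdev is what makes the abort test meaningful: I/O sits in the delay queue long enough for the abort example to race real in-flight commands.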
00:06:56.323 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37499 00:06:56.323 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37564, failed to submit 62 00:06:56.323 success 37503, unsuccessful 61, failed 0 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:56.323 rmmod nvme_tcp 00:06:56.323 rmmod nvme_fabrics 00:06:56.323 rmmod nvme_keyring 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:56.323 05:31:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1017194 ']' 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1017194 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1017194 ']' 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1017194 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1017194 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1017194' 00:06:56.323 killing process with pid 1017194 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1017194 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1017194 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:56.323 05:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.230 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:58.230 00:06:58.230 real 0m11.329s 00:06:58.230 user 0m11.781s 00:06:58.230 sys 0m5.434s 00:06:58.230 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.230 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:58.230 ************************************ 00:06:58.230 END TEST nvmf_abort 00:06:58.230 ************************************ 00:06:58.230 05:31:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:58.230 05:31:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.230 05:31:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.230 05:31:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:58.230 ************************************ 00:06:58.230 START TEST nvmf_ns_hotplug_stress 00:06:58.230 ************************************ 00:06:58.230 05:31:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:58.489 * Looking for test storage... 00:06:58.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.490 
05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.490 05:31:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:58.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.490 --rc genhtml_branch_coverage=1 00:06:58.490 --rc genhtml_function_coverage=1 00:06:58.490 --rc genhtml_legend=1 00:06:58.490 --rc geninfo_all_blocks=1 00:06:58.490 --rc geninfo_unexecuted_blocks=1 00:06:58.490 00:06:58.490 ' 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:58.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.490 --rc genhtml_branch_coverage=1 00:06:58.490 --rc genhtml_function_coverage=1 00:06:58.490 --rc genhtml_legend=1 00:06:58.490 --rc geninfo_all_blocks=1 00:06:58.490 --rc geninfo_unexecuted_blocks=1 00:06:58.490 00:06:58.490 ' 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:58.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.490 --rc genhtml_branch_coverage=1 00:06:58.490 --rc genhtml_function_coverage=1 00:06:58.490 --rc genhtml_legend=1 00:06:58.490 --rc geninfo_all_blocks=1 00:06:58.490 --rc geninfo_unexecuted_blocks=1 00:06:58.490 00:06:58.490 ' 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:58.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.490 --rc genhtml_branch_coverage=1 00:06:58.490 --rc genhtml_function_coverage=1 00:06:58.490 --rc genhtml_legend=1 00:06:58.490 --rc geninfo_all_blocks=1 00:06:58.490 --rc geninfo_unexecuted_blocks=1 00:06:58.490 
00:06:58.490 ' 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:58.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:58.490 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:58.491 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:05.071 05:31:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:05.071 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:05.071 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:05.071 05:31:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:05.071 Found net devices under 0000:af:00.0: cvl_0_0 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:05.071 05:31:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:07:05.071 Found net devices under 0000:af:00.1: cvl_0_1
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:05.071 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:05.071 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:05.071 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:05.071 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:05.071 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:05.071 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:05.071 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:05.071 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:05.071 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:05.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:05.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms
00:07:05.071
00:07:05.071 --- 10.0.0.2 ping statistics ---
00:07:05.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:05.072 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:05.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:05.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms
00:07:05.072
00:07:05.072 --- 10.0.0.1 ping statistics ---
00:07:05.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:05.072 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1021145
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1021145
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1021145 ']'
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:05.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:05.072 [2024-12-10 05:31:52.301027] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization...
00:07:05.072 [2024-12-10 05:31:52.301073] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:05.072 [2024-12-10 05:31:52.378027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:05.072 [2024-12-10 05:31:52.418280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:05.072 [2024-12-10 05:31:52.418313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:05.072 [2024-12-10 05:31:52.418321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:05.072 [2024-12-10 05:31:52.418327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:05.072 [2024-12-10 05:31:52.418332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:05.072 [2024-12-10 05:31:52.419584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:05.072 [2024-12-10 05:31:52.419671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:05.072 [2024-12-10 05:31:52.419672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:05.072 [2024-12-10 05:31:52.728837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:05.072 05:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:05.331 [2024-12-10 05:31:53.134319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:05.331 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:05.590 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:05.848 Malloc0
00:07:05.849 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:05.849 Delay0
00:07:06.107 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:06.107 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:06.366 NULL1
00:07:06.366 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:07:06.625 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:07:06.625 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1021418
00:07:06.625 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:06.625 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:06.884 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:06.884 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:07:06.884 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:07:07.142 true
00:07:07.142 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:07.142 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:07.402 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:07.662 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:07:07.662 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:07:07.921 true
00:07:07.921 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:07.921 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:07.921 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:08.179 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:07:08.179 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:07:08.438 true
00:07:08.438 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:08.438 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.696 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:08.955 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:07:08.955 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:07:08.955 true
00:07:09.213 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:09.213 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.213 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:09.472 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:07:09.472 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:07:09.731 true
00:07:09.731 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:09.731 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.990 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:10.248 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:07:10.248 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:07:10.248 true
00:07:10.507 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:10.507 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.507 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:10.766 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:07:10.766 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:07:11.024 true
00:07:11.024 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:11.024 05:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:11.297 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:11.587 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:07:11.587 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:07:11.587 true
00:07:11.587 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:11.587 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:11.846 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:12.105 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:07:12.105 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:07:12.364 true
00:07:12.364 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:12.364 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:12.622 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:12.881 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:07:12.881 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:07:12.881 true
00:07:12.881 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:12.881 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:13.139 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:13.398 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:07:13.398 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:07:13.656 true
00:07:13.656 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:13.656 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:13.921 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:13.921 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:07:13.921 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:07:14.180 true
00:07:14.180 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:14.180 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:14.438 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:14.697 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:07:14.697 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:07:14.956 true
00:07:14.956 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:14.956 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:15.215 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:15.215 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:07:15.215 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:07:15.473 true
00:07:15.473 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:15.473 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:15.732 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:15.990 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:07:15.990 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:07:16.249 true
00:07:16.249 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:16.249 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:16.249 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:16.507 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:07:16.507 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:07:16.766 true
00:07:16.766 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:16.766 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:17.025 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:17.283 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:07:17.283 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:07:17.283 true
00:07:17.283 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:17.283 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:17.542 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:17.800 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:07:17.800 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:07:18.059 true
00:07:18.059 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:18.059 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:18.317 05:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:18.575 05:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:07:18.575 05:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:07:18.575 true
00:07:18.575 05:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:18.575 05:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:18.834 05:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:19.092 05:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:07:19.092 05:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:07:19.351 true
00:07:19.351 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:19.351 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:19.609 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:19.609 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:07:19.610 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:07:19.868 true
00:07:19.868 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:19.868 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:20.126 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:20.385 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:07:20.385 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:07:20.643 true
00:07:20.643 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:20.643 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:20.902 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:20.902 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:07:20.902 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:07:21.160 true
00:07:21.160 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:21.160 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:21.419 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:21.677 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:07:21.677 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:07:21.936 true
00:07:21.936 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:21.936 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:22.195 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:22.195 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:07:22.195 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:07:22.453 true
00:07:22.453 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:22.453 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:22.712 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:22.971 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:07:22.971 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:07:22.971 true
00:07:23.229 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:23.229 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:23.229 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:23.487 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:07:23.487 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:07:23.745 true
00:07:23.745 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:23.745 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:24.003 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:24.262 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:24.262 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:24.262 true
00:07:24.262 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:24.262 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:24.519 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:24.777 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:24.777 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:25.035 true
00:07:25.035 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418
00:07:25.035 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:25.294 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:25.552
05:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:25.552 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:25.552 true 00:07:25.552 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:25.552 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.811 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.069 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:26.069 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:26.328 true 00:07:26.328 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:26.328 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.615 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.615 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:26.615 05:32:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:26.873 true 00:07:26.873 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:26.873 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.132 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.391 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:27.391 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:27.391 true 00:07:27.391 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:27.391 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.650 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.908 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:27.908 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:28.167 true 00:07:28.167 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:28.167 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.426 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.426 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:28.426 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:28.684 true 00:07:28.684 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:28.684 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.943 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.202 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:29.202 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:29.461 true 00:07:29.461 05:32:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:29.461 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.461 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.720 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:29.720 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:29.979 true 00:07:29.979 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:29.979 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.238 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.497 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:30.497 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:30.497 true 00:07:30.497 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:30.497 05:32:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.755 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.014 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:31.014 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:31.272 true 00:07:31.272 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:31.272 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.531 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.531 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:31.531 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:31.790 true 00:07:31.790 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:31.790 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.048 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.307 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:32.308 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:32.308 true 00:07:32.566 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:32.566 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.566 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.825 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:32.825 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:33.083 true 00:07:33.083 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:33.083 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.341 
05:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.600 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:33.600 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:33.600 true 00:07:33.600 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:33.600 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.859 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.117 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:34.117 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:34.375 true 00:07:34.375 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:34.375 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.634 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.892 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:34.892 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:34.892 true 00:07:34.892 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:34.892 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.151 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.409 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:35.409 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:35.668 true 00:07:35.668 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:35.668 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.926 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.926 
05:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:35.926 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:36.185 true 00:07:36.185 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:36.185 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.443 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.701 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:36.701 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:36.959 true 00:07:36.959 Initializing NVMe Controllers 00:07:36.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:36.959 Controller IO queue size 128, less than required. 00:07:36.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:36.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:36.959 Initialization complete. Launching workers. 
00:07:36.959 ======================================================== 00:07:36.959 Latency(us) 00:07:36.959 Device Information : IOPS MiB/s Average min max 00:07:36.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27362.77 13.36 4677.69 2216.92 43505.59 00:07:36.959 ======================================================== 00:07:36.959 Total : 27362.77 13.36 4677.69 2216.92 43505.59 00:07:36.959 00:07:36.959 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1021418 00:07:36.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1021418) - No such process 00:07:36.959 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1021418 00:07:36.959 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.959 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.217 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:37.217 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:37.217 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:37.217 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.217 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:37.475 null0 
00:07:37.475 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:37.475 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.475 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:37.732 null1 00:07:37.732 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:37.732 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.732 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:37.732 null2 00:07:37.991 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:37.991 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.991 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:37.991 null3 00:07:37.991 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:37.991 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.991 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:38.249 null4 00:07:38.249 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.249 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.249 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:38.507 null5 00:07:38.507 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.507 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.507 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:38.507 null6 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:38.766 null7 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.766 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1026989 1026991 1026994 1026998 1027001 1027004 1027008 1027011
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.767 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:39.031 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:39.031 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:39.031 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:39.031 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:39.031 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:39.031 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:39.031 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:39.031 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.292 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:39.551 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:39.551 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:39.551 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:39.551 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.552 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:39.811 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:40.070 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.070 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.070 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:40.070 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.070 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.070 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:40.070 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.071 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:40.330 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:40.330 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:40.330 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:40.330 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:40.330 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:40.330 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:40.330 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:40.330 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.589 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:40.848 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:40.848 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:40.848 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:40.848 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:40.848 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:40.848 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.849 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:41.108 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:41.108 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:41.108 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:41.108 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:41.108 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:41.108 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:41.108 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:41.108 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.367 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.626 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.626 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:07:41.626 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.626 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.626 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.626 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.626 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.626 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.886 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.145 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:42.404 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.404 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:42.404 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:42.404 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.404 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:42.404 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:42.404 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:42.404 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:42.662 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.662 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.662 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:42.662 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.663 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.922 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.181 05:32:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.181 rmmod nvme_tcp 00:07:43.181 rmmod nvme_fabrics 00:07:43.181 rmmod nvme_keyring 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1021145 ']' 00:07:43.181 05:32:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1021145 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1021145 ']' 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1021145 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021145 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021145' 00:07:43.181 killing process with pid 1021145 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1021145 00:07:43.181 05:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1021145 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # 
iptables-save 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.440 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.345 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.345 00:07:45.345 real 0m47.107s 00:07:45.345 user 3m21.341s 00:07:45.345 sys 0m16.957s 00:07:45.345 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.345 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.345 ************************************ 00:07:45.345 END TEST nvmf_ns_hotplug_stress 00:07:45.345 ************************************ 00:07:45.345 05:32:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:45.345 05:32:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.345 05:32:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.345 05:32:33 nvmf_tcp.nvmf_target_core -- 
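For readers following the trace above: the interleaved sh@16-18 lines are eight concurrent workers, each repeatedly hot-adding and hot-removing one namespace on cnode1. The following is a stand-alone sketch reconstructed from the xtrace output, not copied from the SPDK test script; the rpc function is a stub standing in for scripts/rpc.py so the sketch runs without a live nvmf target, and the loop structure (10 iterations per worker, nsid n backed by bdev null(n-1)) is inferred from the trace.

```shell
#!/usr/bin/env bash
# Stub for scripts/rpc.py (assumption: real test invokes the SPDK RPC client here).
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

# One worker per namespace: hot-add then hot-remove its nsid, 10 times.
add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; ++i)); do                              # sh@16 in the trace
        rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"     # sh@17
        rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"             # sh@18
    done
}

# Run the eight workers concurrently; their xtrace output interleaves,
# which is why the log shows adds and removes in shuffled batches.
for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &
done
wait
```

Run serially, each worker emits exactly 20 RPC calls (10 adds, 10 removes); run in the background as above, only the interleaving order changes, matching the shuffled nsid order seen in the log.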
common/autotest_common.sh@10 -- # set +x 00:07:45.603 ************************************ 00:07:45.603 START TEST nvmf_delete_subsystem 00:07:45.603 ************************************ 00:07:45.603 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:45.603 * Looking for test storage... 00:07:45.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.603 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.603 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.603 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.603 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.603 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.603 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.604 05:32:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.604 --rc genhtml_branch_coverage=1 00:07:45.604 --rc genhtml_function_coverage=1 00:07:45.604 --rc genhtml_legend=1 00:07:45.604 --rc geninfo_all_blocks=1 00:07:45.604 --rc geninfo_unexecuted_blocks=1 00:07:45.604 00:07:45.604 ' 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.604 --rc genhtml_branch_coverage=1 00:07:45.604 --rc genhtml_function_coverage=1 00:07:45.604 --rc genhtml_legend=1 00:07:45.604 --rc geninfo_all_blocks=1 00:07:45.604 --rc geninfo_unexecuted_blocks=1 00:07:45.604 00:07:45.604 ' 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.604 --rc genhtml_branch_coverage=1 00:07:45.604 --rc genhtml_function_coverage=1 00:07:45.604 --rc genhtml_legend=1 00:07:45.604 --rc geninfo_all_blocks=1 00:07:45.604 --rc geninfo_unexecuted_blocks=1 00:07:45.604 00:07:45.604 ' 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.604 --rc 
genhtml_branch_coverage=1 00:07:45.604 --rc genhtml_function_coverage=1 00:07:45.604 --rc genhtml_legend=1 00:07:45.604 --rc geninfo_all_blocks=1 00:07:45.604 --rc geninfo_unexecuted_blocks=1 00:07:45.604 00:07:45.604 ' 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # 
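The `cmp_versions 1.15 '<' 2` trace above splits each version string on `.-:` and compares it component-wise to decide whether the installed lcov predates 2.x. A hedged standalone reconstruction of that logic (not the exact scripts/common.sh source):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison seen in the trace.
# Returns 0 (true) if $1 is strictly older than $2.
lt() {
    local IFS=.-:            # split versions on '.', '-', ':' as the trace does
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1                 # ver1 newer: not less-than
        (( a < b )) && return 0                 # ver1 older: less-than
    done
    return 1                                    # equal: not less-than
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

In the log this check succeeds (`return 0`), so the branch-coverage `--rc` options for the older lcov 1.x are exported into `LCOV_OPTS`.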
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.604 05:32:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.604 05:32:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.339 05:32:39 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:52.339 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:52.339 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.339 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:52.340 Found net devices under 0000:af:00.0: cvl_0_0 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.1: cvl_0_1' 00:07:52.340 Found net devices under 0000:af:00.1: cvl_0_1 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
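The discovery loop traced above (`gather_supported_nvmf_pci_devs`) walks the PCI bus for supported NIC device IDs, here matching two Intel E810 ports (0x8086:0x159b, driver `ice`), then collects the net interfaces under each device's sysfs node. A hedged sketch of that scan, reduced to the E810 subset seen in the log:

```shell
#!/usr/bin/env bash
# Sketch of the PCI scan from the trace (Intel E810 IDs only; the real
# function also checks X722 and several Mellanox devices, and RDMA rules).
intel=0x8086
e810_ids=(0x1592 0x159b)

pci_devs=()
for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev/vendor" ] || continue
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    [ "$vendor" = "$intel" ] || continue
    for id in "${e810_ids[@]}"; do
        [ "$device" = "$id" ] && pci_devs+=("${dev##*/}")
    done
done

# For TCP transports, any net interface under the PCI device qualifies.
net_devs=()
for pci in "${pci_devs[@]}"; do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] && net_devs+=("${net##*/}")
    done
done
printf 'Found net devices: %s\n' "${net_devs[*]}"
```

On this node the scan yields the two ports 0000:af:00.0/cvl_0_0 and 0000:af:00.1/cvl_0_1, which is why `is_hw=yes` is set next.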
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:52.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:07:52.340 00:07:52.340 --- 10.0.0.2 ping statistics --- 00:07:52.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.340 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:07:52.340 00:07:52.340 --- 10.0.0.1 ping statistics --- 00:07:52.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.340 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:52.340 05:32:39 
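The `nvmf_tcp_init` steps traced above amount to: move one port of the NIC pair into a private network namespace for the target, address both ends on 10.0.0.0/24, open TCP port 4420 in iptables, and verify reachability with ping in both directions. A hedged sketch of that sequence with the interface names and addresses as recorded in the log (requires root and this node's NICs, so it is guarded rather than run unconditionally):

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as traced; cvl_0_0 = target side, cvl_0_1 = initiator side.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

setup_tcp_ns() {
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"   # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
    "${NVMF_TARGET_NS_CMD[@]}" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    "${NVMF_TARGET_NS_CMD[@]}" ip link set cvl_0_0 up
    "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    "${NVMF_TARGET_NS_CMD[@]}" ping -c 1 10.0.0.1        # target ns -> initiator
}

# Only attempt this on a machine that actually has the cvl_0_0 port and root.
if [ "$(id -u)" -eq 0 ] && [ -e /sys/class/net/cvl_0_0 ]; then
    setup_tcp_ns
fi
```

The `NVMF_TARGET_NS_CMD` wrapper is then prepended to `NVMF_APP`, which is why the target binary is launched below as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`.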
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1031476 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1031476 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1031476 ']' 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.340 [2024-12-10 05:32:39.501163] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:07:52.340 [2024-12-10 05:32:39.501239] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.340 [2024-12-10 05:32:39.580489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.340 [2024-12-10 05:32:39.618653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.340 [2024-12-10 05:32:39.618689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.340 [2024-12-10 05:32:39.618696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.340 [2024-12-10 05:32:39.618703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.340 [2024-12-10 05:32:39.618708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:52.340 [2024-12-10 05:32:39.619878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.340 [2024-12-10 05:32:39.619879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.340 [2024-12-10 05:32:39.764088] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.340 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.341 [2024-12-10 05:32:39.788319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.341 NULL1 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.341 Delay0 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.341 05:32:39 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1031503 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:52.341 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:52.341 [2024-12-10 05:32:39.896079] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:54.244 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.245 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.245 05:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with 
error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 [2024-12-10 05:32:42.055685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a92c0 is same with the state(6) to be set 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 
00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed 
with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 
00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 Write completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with 
error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 Read completed with error (sct=0, sc=8) 00:07:54.245 starting I/O failed: -6 00:07:54.245 starting I/O failed: -6 00:07:54.245 starting I/O failed: -6 00:07:54.245 starting I/O failed: -6 00:07:54.245 starting I/O failed: -6 00:07:55.182 [2024-12-10 05:32:43.031951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aa9b0 is same with the state(6) to be set 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 
00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 [2024-12-10 05:32:43.056241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a9b40 is same with the state(6) to be set 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 [2024-12-10 05:32:43.058726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f560000d060 is same with the state(6) to be set 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 
00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 [2024-12-10 05:32:43.058859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f560000d6c0 is same with the state(6) to be set 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 
Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Write completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 Read completed with error (sct=0, sc=8) 00:07:55.182 [2024-12-10 05:32:43.059570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5600000c80 is same with the state(6) to be set 00:07:55.182 Initializing NVMe Controllers 00:07:55.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:55.182 Controller IO queue size 128, 
less than required.
00:07:55.182 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:55.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:55.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:55.182 Initialization complete. Launching workers.
00:07:55.182 ========================================================
00:07:55.182 Latency(us)
00:07:55.182 Device Information : IOPS MiB/s Average min max
00:07:55.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 155.00 0.08 892415.92 233.29 2000791.78
00:07:55.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.48 0.08 1097948.96 305.24 2003435.08
00:07:55.182 ========================================================
00:07:55.182 Total : 314.48 0.15 996643.58 233.29 2003435.08
00:07:55.182
00:07:55.182 [2024-12-10 05:32:43.060149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aa9b0 (9): Bad file descriptor
00:07:55.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:55.182 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:55.182 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:55.182 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1031503
00:07:55.182 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1031503
00:07:55.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1031503) - No such process
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1031503
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1031503
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1031503
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:55.750 [2024-12-10 05:32:43.585769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1032179
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:55.750 05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1032179
05:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:56.009 [2024-12-10 05:32:43.679137] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:56.268 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:56.268 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1032179
00:07:56.268 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:56.835 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:56.835 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1032179
00:07:56.835 05:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:57.404 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:57.404 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1032179
00:07:57.404 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:57.975 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:57.975 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1032179
00:07:57.975 05:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:58.234 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:58.234 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1032179
00:07:58.234 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:58.801 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:58.801 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1032179
00:07:58.801 05:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:59.060 Initializing NVMe Controllers
00:07:59.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:59.060 Controller IO queue size 128, less than required.
00:07:59.060 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:59.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:59.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:59.060 Initialization complete. Launching workers.
00:07:59.060 ========================================================
00:07:59.060 Latency(us)
00:07:59.060 Device Information : IOPS MiB/s Average min max
00:07:59.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002740.33 1000150.68 1040553.76
00:07:59.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003699.23 1000156.19 1011502.88
00:07:59.060 ========================================================
00:07:59.060 Total : 256.00 0.12 1003219.78 1000150.68 1040553.76
00:07:59.060
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1032179
00:07:59.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1032179) - No such process
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1032179
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp
00:07:59.319 rmmod nvme_tcp
00:07:59.319 rmmod nvme_fabrics
00:07:59.319 rmmod nvme_keyring
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1031476 ']'
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1031476
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1031476 ']'
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1031476
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:59.319 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1031476
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1031476'
00:07:59.581 killing process with pid 1031476
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1031476
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1031476
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:59.581 05:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:02.119
00:08:02.119 real 0m16.212s
00:08:02.119 user 0m29.306s
00:08:02.119 sys 0m5.460s
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:02.119 ************************************
00:08:02.119 END TEST nvmf_delete_subsystem
00:08:02.119 ************************************
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:02.119 ************************************
00:08:02.119 START TEST nvmf_host_management
00:08:02.119 ************************************
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:02.119 * Looking for test storage...
00:08:02.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:02.119 05:32:49
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:02.119 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.120 --rc genhtml_branch_coverage=1 00:08:02.120 --rc genhtml_function_coverage=1 00:08:02.120 --rc genhtml_legend=1 00:08:02.120 --rc 
geninfo_all_blocks=1 00:08:02.120 --rc geninfo_unexecuted_blocks=1 00:08:02.120 00:08:02.120 ' 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.120 --rc genhtml_branch_coverage=1 00:08:02.120 --rc genhtml_function_coverage=1 00:08:02.120 --rc genhtml_legend=1 00:08:02.120 --rc geninfo_all_blocks=1 00:08:02.120 --rc geninfo_unexecuted_blocks=1 00:08:02.120 00:08:02.120 ' 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.120 --rc genhtml_branch_coverage=1 00:08:02.120 --rc genhtml_function_coverage=1 00:08:02.120 --rc genhtml_legend=1 00:08:02.120 --rc geninfo_all_blocks=1 00:08:02.120 --rc geninfo_unexecuted_blocks=1 00:08:02.120 00:08:02.120 ' 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.120 --rc genhtml_branch_coverage=1 00:08:02.120 --rc genhtml_function_coverage=1 00:08:02.120 --rc genhtml_legend=1 00:08:02.120 --rc geninfo_all_blocks=1 00:08:02.120 --rc geninfo_unexecuted_blocks=1 00:08:02.120 00:08:02.120 ' 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.120 
05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.120 05:32:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:08.702 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:08.702 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.702 05:32:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:08.702 Found net devices under 0000:af:00.0: cvl_0_0 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:08.702 Found net devices under 0000:af:00.1: cvl_0_1 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.702 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:08:08.703 00:08:08.703 --- 10.0.0.2 ping statistics --- 00:08:08.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.703 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:08.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:08:08.703 00:08:08.703 --- 10.0.0.1 ping statistics --- 00:08:08.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.703 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.703 05:32:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1036323 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1036323 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1036323 ']' 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.703 05:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.703 [2024-12-10 05:32:55.849229] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
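The `waitforlisten 1036323` step above blocks until the freshly launched `nvmf_tgt` process accepts RPC connections on the UNIX domain socket `/var/tmp/spdk.sock` (with `max_retries=100`, per the trace). A minimal sketch of that polling pattern; the function name and delay are illustrative, not SPDK's actual implementation:

```python
import os
import socket
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.1):
    """Poll until some process accepts connections on a UNIX domain socket.

    Mirrors the retry loop visible in autotest_common.sh (max_retries=100);
    returns True once a connect() succeeds, False if retries run out.
    """
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True  # target is up and listening
            except OSError:
                pass  # socket file exists but nobody is accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```

Checking for a successful `connect()` rather than mere existence of the socket file matters: the file appears at `bind()` time, before the app's `listen()` call.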
00:08:08.703 [2024-12-10 05:32:55.849280] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.703 [2024-12-10 05:32:55.928346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.703 [2024-12-10 05:32:55.968171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.703 [2024-12-10 05:32:55.968210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.703 [2024-12-10 05:32:55.968217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.703 [2024-12-10 05:32:55.968222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.703 [2024-12-10 05:32:55.968229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
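The `-m 0x1E` core mask passed via `nvmfappstart` explains why the reactor threads below come up on cores 1 through 4: bits 1-4 of `0x1E` are set, while bit 0 (core 0) is left free for the bdevperf initiator started later with `-c 0x1`. A quick illustrative decoder (not part of SPDK):

```python
def cores_from_mask(mask):
    """Return the list of CPU core IDs selected by an SPDK/DPDK-style
    hexadecimal core mask (bit N set -> core N is used)."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

# 0x1E = 0b11110 -> cores 1, 2, 3 and 4, matching the four reactors
print(cores_from_mask(0x1E))  # [1, 2, 3, 4]
```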
00:08:08.703 [2024-12-10 05:32:55.969568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.703 [2024-12-10 05:32:55.969673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.703 [2024-12-10 05:32:55.969762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:08.703 [2024-12-10 05:32:55.969769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.962 [2024-12-10 05:32:56.740170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:08.962 05:32:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.962 Malloc0 00:08:08.962 [2024-12-10 05:32:56.816757] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:08.962 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1036588 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1036588 /var/tmp/bdevperf.sock 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1036588 ']' 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:09.222 { 00:08:09.222 "params": { 00:08:09.222 "name": "Nvme$subsystem", 00:08:09.222 "trtype": "$TEST_TRANSPORT", 00:08:09.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:09.222 "adrfam": "ipv4", 00:08:09.222 "trsvcid": "$NVMF_PORT", 00:08:09.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:09.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:09.222 "hdgst": ${hdgst:-false}, 
00:08:09.222 "ddgst": ${ddgst:-false} 00:08:09.222 }, 00:08:09.222 "method": "bdev_nvme_attach_controller" 00:08:09.222 } 00:08:09.222 EOF 00:08:09.222 )") 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:09.222 05:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:09.222 "params": { 00:08:09.222 "name": "Nvme0", 00:08:09.222 "trtype": "tcp", 00:08:09.222 "traddr": "10.0.0.2", 00:08:09.222 "adrfam": "ipv4", 00:08:09.222 "trsvcid": "4420", 00:08:09.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:09.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:09.222 "hdgst": false, 00:08:09.222 "ddgst": false 00:08:09.222 }, 00:08:09.222 "method": "bdev_nvme_attach_controller" 00:08:09.222 }' 00:08:09.222 [2024-12-10 05:32:56.911481] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:08:09.222 [2024-12-10 05:32:56.911529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036588 ] 00:08:09.222 [2024-12-10 05:32:56.985009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.222 [2024-12-10 05:32:57.024846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.481 Running I/O for 10 seconds... 
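The `gen_nvmf_target_json 0` heredoc above expands shell variables (`$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`, ...) into the `bdev_nvme_attach_controller` config that bdevperf reads via `--json /dev/fd/63`. A sketch of the same substitution in Python, with the values taken from the rendered output in the log; the function name mirrors the shell helper but the signature is hypothetical:

```python
import json

def gen_nvmf_target_json(subsystems=(0,), target_ip="10.0.0.2", port="4420"):
    """Build the bdevperf JSON config: one bdev_nvme_attach_controller
    entry per subsystem index, as the nvmf/common.sh helper does."""
    return json.dumps(
        [
            {
                "params": {
                    "name": f"Nvme{n}",
                    "trtype": "tcp",
                    "traddr": target_ip,
                    "adrfam": "ipv4",
                    "trsvcid": port,
                    "subnqn": f"nqn.2016-06.io.spdk:cnode{n}",
                    "hostnqn": f"nqn.2016-06.io.spdk:host{n}",
                    "hdgst": False,
                    "ddgst": False,
                },
                "method": "bdev_nvme_attach_controller",
            }
            for n in subsystems
        ],
        indent=2,
    )
```

In the shell version the per-subsystem fragments are joined with `IFS=,` and pretty-printed through `jq .`, which is why the trace shows the final single-object config for `Nvme0`.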
00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=95 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 95 -ge 100 ']' 00:08:09.481 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.742 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.742 [2024-12-10 05:32:57.583512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:09.742 [2024-12-10 05:32:57.583552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.583563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:09.742 [2024-12-10 05:32:57.583570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.583578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:09.742 [2024-12-10 05:32:57.583585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.583592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:09.742 [2024-12-10 05:32:57.583604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.583611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb07e0 is same with the state(6) to be set 00:08:09.742 [2024-12-10 05:32:57.585059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 
05:32:57.585138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585226] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 [2024-12-10 05:32:57.585388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.742 [2024-12-10 05:32:57.585396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.742 
[2024-12-10 05:32:57.585404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.743 [2024-12-10 05:32:57.585412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.743 [2024-12-10 05:32:57.585418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.743 [2024-12-10 05:32:57.585426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.743 [2024-12-10 05:32:57.585432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.743 [2024-12-10 05:32:57.585441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.743 [2024-12-10 05:32:57.585449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.743 [2024-12-10 05:32:57.585458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.743 [2024-12-10 05:32:57.585464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.743 [2024-12-10 05:32:57.585472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.743 [2024-12-10 05:32:57.585479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.743 [2024-12-10 05:32:57.585487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.743 [2024-12-10 05:32:57.585493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.743 [2024-12-10 05:32:57.585501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.743 [2024-12-10 05:32:57.585508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.743 [2024-12-10 05:32:57.585517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.743 [2024-12-10 05:32:57.585523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.743 [2024-12-10 05:32:57.585531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.743 [2024-12-10 05:32:57.585538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.743 [2024-12-10 05:32:57.585545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.743 [2024-12-10 05:32:57.585552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.743 [2024-12-10 05:32:57.585561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.743 [2024-12-10 05:32:57.585567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:09.743 [2024-12-10 05:32:57.585575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:09.743 [2024-12-10 05:32:57.585581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeated for cid:33-62, lba:102528-106240; repeats elided ...]
00:08:09.744 [2024-12-10 05:32:57.586041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c9770 is same with the state(6) to be set
00:08:09.744 [2024-12-10 05:32:57.586995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:09.744 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.744 task offset: 106368 on job bdev=Nvme0n1 fails
00:08:09.744
00:08:09.744 Latency(us)
00:08:09.744 [2024-12-10T04:32:57.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:09.744 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:09.744 Job: Nvme0n1 ended in about 0.40 seconds with error
00:08:09.744 Verification LBA range: start 0x0 length 0x400
00:08:09.744 Nvme0n1 : 0.40 1912.26 119.52 159.35 0.00 30072.16 4525.10 27213.04
00:08:09.744 [2024-12-10T04:32:57.640Z] ===================================================================================================================
00:08:09.744 [2024-12-10T04:32:57.640Z] Total : 1912.26 119.52 159.35 0.00 30072.16 4525.10 27213.04
00:08:09.744 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:09.744 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.744 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:09.744 [2024-12-10 05:32:57.589324] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:09.744 [2024-12-10 05:32:57.589343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb07e0 (9): Bad file descriptor
00:08:09.744 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.744 05:32:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:08:09.744 [2024-12-10 05:32:57.609977] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1036588
00:08:11.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1036588) - No such process
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:11.121 {
00:08:11.121 "params": {
00:08:11.121 "name": "Nvme$subsystem",
00:08:11.121 "trtype": "$TEST_TRANSPORT",
00:08:11.121 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:11.121 "adrfam": "ipv4",
00:08:11.121 "trsvcid": "$NVMF_PORT",
00:08:11.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:11.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:11.121 "hdgst": ${hdgst:-false},
00:08:11.121 "ddgst": ${ddgst:-false}
00:08:11.121 },
00:08:11.121 "method": "bdev_nvme_attach_controller"
00:08:11.121 }
00:08:11.121 EOF
00:08:11.121 )")
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:11.121 05:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:11.121 "params": {
00:08:11.121 "name": "Nvme0",
00:08:11.121 "trtype": "tcp",
00:08:11.121 "traddr": "10.0.0.2",
00:08:11.121 "adrfam": "ipv4",
00:08:11.121 "trsvcid": "4420",
00:08:11.121 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:11.121 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:11.121 "hdgst": false,
00:08:11.121 "ddgst": false
00:08:11.121 },
00:08:11.121 "method": "bdev_nvme_attach_controller"
00:08:11.121 }'
00:08:11.121 [2024-12-10 05:32:58.651217] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization...
00:08:11.121 [2024-12-10 05:32:58.651266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036835 ]
00:08:11.121 [2024-12-10 05:32:58.723650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:11.121 [2024-12-10 05:32:58.761464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:11.380 Running I/O for 1 seconds...
00:08:12.317 2004.00 IOPS, 125.25 MiB/s
00:08:12.317 Latency(us)
00:08:12.317 [2024-12-10T04:33:00.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:12.317 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:12.317 Verification LBA range: start 0x0 length 0x400
00:08:12.317 Nvme0n1 : 1.01 2055.80 128.49 0.00 0.00 30538.44 1856.85 27088.21
00:08:12.317 [2024-12-10T04:33:00.213Z] ===================================================================================================================
00:08:12.317 [2024-12-10T04:33:00.213Z] Total : 2055.80 128.49 0.00 0.00 30538.44 1856.85 27088.21
00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:12.576 05:33:00
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.576 rmmod nvme_tcp 00:08:12.576 rmmod nvme_fabrics 00:08:12.576 rmmod nvme_keyring 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1036323 ']' 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1036323 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1036323 ']' 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1036323 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1036323 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1036323' 00:08:12.576 killing process with pid 1036323 00:08:12.576 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1036323 00:08:12.576 05:33:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1036323 00:08:12.835 [2024-12-10 05:33:00.510346] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.835 05:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.740 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:14.740 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:14.740 00:08:14.740 real 0m13.057s 00:08:14.740 user 0m22.323s 
00:08:14.740 sys 0m5.584s 00:08:14.740 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.740 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.740 ************************************ 00:08:14.740 END TEST nvmf_host_management 00:08:14.740 ************************************ 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.000 ************************************ 00:08:15.000 START TEST nvmf_lvol 00:08:15.000 ************************************ 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:15.000 * Looking for test storage... 
00:08:15.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.000 05:33:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:15.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.000 --rc genhtml_branch_coverage=1 00:08:15.000 --rc genhtml_function_coverage=1 00:08:15.000 --rc genhtml_legend=1 00:08:15.000 --rc geninfo_all_blocks=1 00:08:15.000 --rc geninfo_unexecuted_blocks=1 
00:08:15.000 00:08:15.000 ' 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:15.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.000 --rc genhtml_branch_coverage=1 00:08:15.000 --rc genhtml_function_coverage=1 00:08:15.000 --rc genhtml_legend=1 00:08:15.000 --rc geninfo_all_blocks=1 00:08:15.000 --rc geninfo_unexecuted_blocks=1 00:08:15.000 00:08:15.000 ' 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:15.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.000 --rc genhtml_branch_coverage=1 00:08:15.000 --rc genhtml_function_coverage=1 00:08:15.000 --rc genhtml_legend=1 00:08:15.000 --rc geninfo_all_blocks=1 00:08:15.000 --rc geninfo_unexecuted_blocks=1 00:08:15.000 00:08:15.000 ' 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:15.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.000 --rc genhtml_branch_coverage=1 00:08:15.000 --rc genhtml_function_coverage=1 00:08:15.000 --rc genhtml_legend=1 00:08:15.000 --rc geninfo_all_blocks=1 00:08:15.000 --rc geninfo_unexecuted_blocks=1 00:08:15.000 00:08:15.000 ' 00:08:15.000 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.001 05:33:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.001 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.260 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.260 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.260 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:15.260 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.261 05:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:21.833 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:21.834 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:21.834 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.834 
05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:21.834 Found net devices under 0000:af:00.0: cvl_0_0 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.834 05:33:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:21.834 Found net devices under 0000:af:00.1: cvl_0_1 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:21.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:21.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:08:21.834 00:08:21.834 --- 10.0.0.2 ping statistics --- 00:08:21.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.834 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:08:21.834 00:08:21.834 --- 10.0.0.1 ping statistics --- 00:08:21.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.834 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1041199 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1041199 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1041199 ']' 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.834 05:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.834 [2024-12-10 05:33:09.008933] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:08:21.834 [2024-12-10 05:33:09.008983] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.834 [2024-12-10 05:33:09.089141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.834 [2024-12-10 05:33:09.128194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.834 [2024-12-10 05:33:09.128231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.834 [2024-12-10 05:33:09.128238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.834 [2024-12-10 05:33:09.128245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.834 [2024-12-10 05:33:09.128250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:21.834 [2024-12-10 05:33:09.129557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.834 [2024-12-10 05:33:09.129593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.834 [2024-12-10 05:33:09.129594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.834 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.834 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:21.834 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:21.835 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.835 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.835 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.835 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:21.835 [2024-12-10 05:33:09.434894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.835 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.835 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:21.835 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:22.094 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:22.094 05:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:22.352 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:22.611 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d93395ef-bd59-410e-aec6-c586c336363e 00:08:22.611 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d93395ef-bd59-410e-aec6-c586c336363e lvol 20 00:08:22.869 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f571570d-947a-4209-abc8-2ea79561b47f 00:08:22.869 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:22.869 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f571570d-947a-4209-abc8-2ea79561b47f 00:08:23.129 05:33:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:23.387 [2024-12-10 05:33:11.107581] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.387 05:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.646 05:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1041546 00:08:23.646 05:33:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:23.646 05:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:24.587 05:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f571570d-947a-4209-abc8-2ea79561b47f MY_SNAPSHOT 00:08:24.846 05:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9bdd86f7-5975-4cb8-bde2-fd5eb343c25e 00:08:24.846 05:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f571570d-947a-4209-abc8-2ea79561b47f 30 00:08:25.104 05:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9bdd86f7-5975-4cb8-bde2-fd5eb343c25e MY_CLONE 00:08:25.363 05:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f5905d99-13cc-485c-89fb-e634b60d4c4a 00:08:25.363 05:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f5905d99-13cc-485c-89fb-e634b60d4c4a 00:08:25.931 05:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1041546 00:08:34.047 Initializing NVMe Controllers 00:08:34.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:34.047 Controller IO queue size 128, less than required. 00:08:34.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:34.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:34.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:34.047 Initialization complete. Launching workers. 00:08:34.047 ======================================================== 00:08:34.048 Latency(us) 00:08:34.048 Device Information : IOPS MiB/s Average min max 00:08:34.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12408.20 48.47 10316.47 1509.86 103302.29 00:08:34.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12338.70 48.20 10374.20 3620.17 39884.22 00:08:34.048 ======================================================== 00:08:34.048 Total : 24746.90 96.67 10345.25 1509.86 103302.29 00:08:34.048 00:08:34.048 05:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:34.306 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f571570d-947a-4209-abc8-2ea79561b47f 00:08:34.565 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d93395ef-bd59-410e-aec6-c586c336363e 00:08:34.565 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:34.565 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:34.565 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:34.565 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.565 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:34.565 05:33:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.565 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:34.565 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.565 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.565 rmmod nvme_tcp 00:08:34.565 rmmod nvme_fabrics 00:08:34.824 rmmod nvme_keyring 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1041199 ']' 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1041199 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1041199 ']' 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1041199 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1041199 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1041199' 00:08:34.824 killing process with pid 1041199 00:08:34.824 
05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1041199 00:08:34.824 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1041199 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.084 05:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.989 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:36.989 00:08:36.989 real 0m22.125s 00:08:36.989 user 1m3.459s 00:08:36.989 sys 0m7.655s 00:08:36.989 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.989 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:36.989 ************************************ 00:08:36.989 
END TEST nvmf_lvol 00:08:36.989 ************************************ 00:08:36.989 05:33:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:36.989 05:33:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.989 05:33:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.989 05:33:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.248 ************************************ 00:08:37.248 START TEST nvmf_lvs_grow 00:08:37.248 ************************************ 00:08:37.248 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:37.248 * Looking for test storage... 00:08:37.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.248 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.248 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.248 05:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.248 05:33:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:37.248 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.249 --rc genhtml_branch_coverage=1 00:08:37.249 --rc genhtml_function_coverage=1 00:08:37.249 --rc genhtml_legend=1 00:08:37.249 --rc geninfo_all_blocks=1 00:08:37.249 --rc geninfo_unexecuted_blocks=1 00:08:37.249 00:08:37.249 ' 
00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.249 --rc genhtml_branch_coverage=1 00:08:37.249 --rc genhtml_function_coverage=1 00:08:37.249 --rc genhtml_legend=1 00:08:37.249 --rc geninfo_all_blocks=1 00:08:37.249 --rc geninfo_unexecuted_blocks=1 00:08:37.249 00:08:37.249 ' 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.249 --rc genhtml_branch_coverage=1 00:08:37.249 --rc genhtml_function_coverage=1 00:08:37.249 --rc genhtml_legend=1 00:08:37.249 --rc geninfo_all_blocks=1 00:08:37.249 --rc geninfo_unexecuted_blocks=1 00:08:37.249 00:08:37.249 ' 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.249 --rc genhtml_branch_coverage=1 00:08:37.249 --rc genhtml_function_coverage=1 00:08:37.249 --rc genhtml_legend=1 00:08:37.249 --rc geninfo_all_blocks=1 00:08:37.249 --rc geninfo_unexecuted_blocks=1 00:08:37.249 00:08:37.249 ' 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.249 05:33:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.249 
05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.249 05:33:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.249 
05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.249 05:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.826 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:43.827 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:43.827 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.827 
05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:43.827 Found net devices under 0000:af:00.0: cvl_0_0 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:43.827 Found net devices under 0000:af:00.1: cvl_0_1 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:43.827 05:33:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:43.827 05:33:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:43.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:08:43.827 00:08:43.827 --- 10.0.0.2 ping statistics --- 00:08:43.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.827 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:08:43.827 00:08:43.827 --- 10.0.0.1 ping statistics --- 00:08:43.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.827 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1047024 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1047024 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1047024 ']' 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.827 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.827 [2024-12-10 05:33:31.130710] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:08:43.828 [2024-12-10 05:33:31.130759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.828 [2024-12-10 05:33:31.209915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.828 [2024-12-10 05:33:31.248332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.828 [2024-12-10 05:33:31.248370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.828 [2024-12-10 05:33:31.248377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.828 [2024-12-10 05:33:31.248383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.828 [2024-12-10 05:33:31.248389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:43.828 [2024-12-10 05:33:31.248871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:43.828 [2024-12-10 05:33:31.552583] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.828 ************************************ 00:08:43.828 START TEST lvs_grow_clean 00:08:43.828 ************************************ 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.828 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:44.087 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:44.087 05:33:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:44.346 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:44.346 05:33:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:44.346 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:44.346 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:44.346 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:44.346 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e lvol 150 00:08:44.605 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6512413c-f3f8-4941-a878-9b6071acb540 00:08:44.605 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.605 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:44.864 [2024-12-10 05:33:32.564036] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:44.864 [2024-12-10 05:33:32.564085] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:44.864 true 00:08:44.864 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:44.864 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:45.124 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:45.124 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.124 05:33:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6512413c-f3f8-4941-a878-9b6071acb540 00:08:45.383 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:45.642 [2024-12-10 05:33:33.302265] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.642 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.642 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1047481 00:08:45.642 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.642 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:45.642 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1047481 /var/tmp/bdevperf.sock 00:08:45.642 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1047481 ']' 00:08:45.642 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.642 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.642 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.642 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.642 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:45.901 [2024-12-10 05:33:33.543688] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:08:45.901 [2024-12-10 05:33:33.543733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047481 ] 00:08:45.901 [2024-12-10 05:33:33.617424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.901 [2024-12-10 05:33:33.657661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.901 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.901 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:45.901 05:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:46.160 Nvme0n1 00:08:46.160 05:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:46.418 [ 00:08:46.418 { 00:08:46.418 "name": "Nvme0n1", 00:08:46.418 "aliases": [ 00:08:46.418 "6512413c-f3f8-4941-a878-9b6071acb540" 00:08:46.418 ], 00:08:46.418 "product_name": "NVMe disk", 00:08:46.418 "block_size": 4096, 00:08:46.418 "num_blocks": 38912, 00:08:46.418 "uuid": "6512413c-f3f8-4941-a878-9b6071acb540", 00:08:46.418 "numa_id": 1, 00:08:46.418 "assigned_rate_limits": { 00:08:46.418 "rw_ios_per_sec": 0, 00:08:46.418 "rw_mbytes_per_sec": 0, 00:08:46.418 "r_mbytes_per_sec": 0, 00:08:46.418 "w_mbytes_per_sec": 0 00:08:46.418 }, 00:08:46.418 "claimed": false, 00:08:46.418 "zoned": false, 00:08:46.418 "supported_io_types": { 00:08:46.418 "read": true, 
00:08:46.418 "write": true, 00:08:46.418 "unmap": true, 00:08:46.418 "flush": true, 00:08:46.418 "reset": true, 00:08:46.418 "nvme_admin": true, 00:08:46.418 "nvme_io": true, 00:08:46.418 "nvme_io_md": false, 00:08:46.418 "write_zeroes": true, 00:08:46.418 "zcopy": false, 00:08:46.418 "get_zone_info": false, 00:08:46.418 "zone_management": false, 00:08:46.418 "zone_append": false, 00:08:46.418 "compare": true, 00:08:46.418 "compare_and_write": true, 00:08:46.418 "abort": true, 00:08:46.418 "seek_hole": false, 00:08:46.418 "seek_data": false, 00:08:46.418 "copy": true, 00:08:46.418 "nvme_iov_md": false 00:08:46.418 }, 00:08:46.418 "memory_domains": [ 00:08:46.418 { 00:08:46.418 "dma_device_id": "system", 00:08:46.418 "dma_device_type": 1 00:08:46.418 } 00:08:46.418 ], 00:08:46.418 "driver_specific": { 00:08:46.418 "nvme": [ 00:08:46.418 { 00:08:46.418 "trid": { 00:08:46.418 "trtype": "TCP", 00:08:46.418 "adrfam": "IPv4", 00:08:46.418 "traddr": "10.0.0.2", 00:08:46.418 "trsvcid": "4420", 00:08:46.418 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:46.418 }, 00:08:46.418 "ctrlr_data": { 00:08:46.418 "cntlid": 1, 00:08:46.418 "vendor_id": "0x8086", 00:08:46.418 "model_number": "SPDK bdev Controller", 00:08:46.418 "serial_number": "SPDK0", 00:08:46.418 "firmware_revision": "25.01", 00:08:46.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:46.418 "oacs": { 00:08:46.418 "security": 0, 00:08:46.419 "format": 0, 00:08:46.419 "firmware": 0, 00:08:46.419 "ns_manage": 0 00:08:46.419 }, 00:08:46.419 "multi_ctrlr": true, 00:08:46.419 "ana_reporting": false 00:08:46.419 }, 00:08:46.419 "vs": { 00:08:46.419 "nvme_version": "1.3" 00:08:46.419 }, 00:08:46.419 "ns_data": { 00:08:46.419 "id": 1, 00:08:46.419 "can_share": true 00:08:46.419 } 00:08:46.419 } 00:08:46.419 ], 00:08:46.419 "mp_policy": "active_passive" 00:08:46.419 } 00:08:46.419 } 00:08:46.419 ] 00:08:46.419 05:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1047528 00:08:46.419 05:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:46.419 05:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:46.677 Running I/O for 10 seconds... 00:08:47.612 Latency(us) 00:08:47.612 [2024-12-10T04:33:35.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.612 Nvme0n1 : 1.00 23628.00 92.30 0.00 0.00 0.00 0.00 0.00 00:08:47.612 [2024-12-10T04:33:35.508Z] =================================================================================================================== 00:08:47.612 [2024-12-10T04:33:35.508Z] Total : 23628.00 92.30 0.00 0.00 0.00 0.00 0.00 00:08:47.612 00:08:48.549 05:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:48.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.549 Nvme0n1 : 2.00 23723.00 92.67 0.00 0.00 0.00 0.00 0.00 00:08:48.549 [2024-12-10T04:33:36.445Z] =================================================================================================================== 00:08:48.549 [2024-12-10T04:33:36.445Z] Total : 23723.00 92.67 0.00 0.00 0.00 0.00 0.00 00:08:48.549 00:08:48.807 true 00:08:48.807 05:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:48.807 05:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:48.807 05:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:48.807 05:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:48.807 05:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1047528 00:08:49.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.746 Nvme0n1 : 3.00 23753.00 92.79 0.00 0.00 0.00 0.00 0.00 00:08:49.746 [2024-12-10T04:33:37.642Z] =================================================================================================================== 00:08:49.746 [2024-12-10T04:33:37.642Z] Total : 23753.00 92.79 0.00 0.00 0.00 0.00 0.00 00:08:49.746 00:08:50.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.477 Nvme0n1 : 4.00 23764.25 92.83 0.00 0.00 0.00 0.00 0.00 00:08:50.477 [2024-12-10T04:33:38.373Z] =================================================================================================================== 00:08:50.477 [2024-12-10T04:33:38.373Z] Total : 23764.25 92.83 0.00 0.00 0.00 0.00 0.00 00:08:50.477 00:08:51.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.854 Nvme0n1 : 5.00 23800.40 92.97 0.00 0.00 0.00 0.00 0.00 00:08:51.854 [2024-12-10T04:33:39.750Z] =================================================================================================================== 00:08:51.854 [2024-12-10T04:33:39.750Z] Total : 23800.40 92.97 0.00 0.00 0.00 0.00 0.00 00:08:51.854 00:08:52.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.791 Nvme0n1 : 6.00 23793.33 92.94 0.00 0.00 0.00 0.00 0.00 00:08:52.791 [2024-12-10T04:33:40.687Z] =================================================================================================================== 00:08:52.791 
[2024-12-10T04:33:40.687Z] Total : 23793.33 92.94 0.00 0.00 0.00 0.00 0.00 00:08:52.791 00:08:53.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.727 Nvme0n1 : 7.00 23787.57 92.92 0.00 0.00 0.00 0.00 0.00 00:08:53.727 [2024-12-10T04:33:41.623Z] =================================================================================================================== 00:08:53.727 [2024-12-10T04:33:41.623Z] Total : 23787.57 92.92 0.00 0.00 0.00 0.00 0.00 00:08:53.727 00:08:54.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.663 Nvme0n1 : 8.00 23816.88 93.03 0.00 0.00 0.00 0.00 0.00 00:08:54.663 [2024-12-10T04:33:42.559Z] =================================================================================================================== 00:08:54.663 [2024-12-10T04:33:42.559Z] Total : 23816.88 93.03 0.00 0.00 0.00 0.00 0.00 00:08:54.663 00:08:55.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.599 Nvme0n1 : 9.00 23845.89 93.15 0.00 0.00 0.00 0.00 0.00 00:08:55.599 [2024-12-10T04:33:43.495Z] =================================================================================================================== 00:08:55.599 [2024-12-10T04:33:43.495Z] Total : 23845.89 93.15 0.00 0.00 0.00 0.00 0.00 00:08:55.599 00:08:56.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.535 Nvme0n1 : 10.00 23862.20 93.21 0.00 0.00 0.00 0.00 0.00 00:08:56.535 [2024-12-10T04:33:44.431Z] =================================================================================================================== 00:08:56.535 [2024-12-10T04:33:44.431Z] Total : 23862.20 93.21 0.00 0.00 0.00 0.00 0.00 00:08:56.535 00:08:56.535 00:08:56.535 Latency(us) 00:08:56.535 [2024-12-10T04:33:44.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:56.535 Nvme0n1 : 10.00 23867.34 93.23 0.00 0.00 5359.95 1724.22 11234.74 00:08:56.535 [2024-12-10T04:33:44.431Z] =================================================================================================================== 00:08:56.535 [2024-12-10T04:33:44.431Z] Total : 23867.34 93.23 0.00 0.00 5359.95 1724.22 11234.74 00:08:56.535 { 00:08:56.536 "results": [ 00:08:56.536 { 00:08:56.536 "job": "Nvme0n1", 00:08:56.536 "core_mask": "0x2", 00:08:56.536 "workload": "randwrite", 00:08:56.536 "status": "finished", 00:08:56.536 "queue_depth": 128, 00:08:56.536 "io_size": 4096, 00:08:56.536 "runtime": 10.003211, 00:08:56.536 "iops": 23867.33619834671, 00:08:56.536 "mibps": 93.23178202479184, 00:08:56.536 "io_failed": 0, 00:08:56.536 "io_timeout": 0, 00:08:56.536 "avg_latency_us": 5359.95352020344, 00:08:56.536 "min_latency_us": 1724.2209523809524, 00:08:56.536 "max_latency_us": 11234.742857142857 00:08:56.536 } 00:08:56.536 ], 00:08:56.536 "core_count": 1 00:08:56.536 } 00:08:56.536 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1047481 00:08:56.536 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1047481 ']' 00:08:56.536 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1047481 00:08:56.536 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:56.536 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.536 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1047481 00:08:56.794 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:56.794 05:33:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:56.794 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1047481' 00:08:56.794 killing process with pid 1047481 00:08:56.794 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1047481 00:08:56.794 Received shutdown signal, test time was about 10.000000 seconds 00:08:56.794 00:08:56.794 Latency(us) 00:08:56.794 [2024-12-10T04:33:44.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.794 [2024-12-10T04:33:44.690Z] =================================================================================================================== 00:08:56.794 [2024-12-10T04:33:44.690Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:56.794 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1047481 00:08:56.794 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:57.053 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:57.312 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:57.312 05:33:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:57.312 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:57.312 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:57.312 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:57.571 [2024-12-10 05:33:45.336814] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:57.571 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:57.571 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:57.571 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:57.571 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.571 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.571 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.571 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.571 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.571 
05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.571 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.571 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:57.571 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:57.830 request: 00:08:57.830 { 00:08:57.830 "uuid": "99c020c3-dfe2-4f55-81d5-ff92f339e25e", 00:08:57.830 "method": "bdev_lvol_get_lvstores", 00:08:57.830 "req_id": 1 00:08:57.830 } 00:08:57.830 Got JSON-RPC error response 00:08:57.830 response: 00:08:57.830 { 00:08:57.830 "code": -19, 00:08:57.830 "message": "No such device" 00:08:57.830 } 00:08:57.830 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:57.830 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:57.830 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:57.830 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:57.830 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.089 aio_bdev 00:08:58.089 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6512413c-f3f8-4941-a878-9b6071acb540 00:08:58.089 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6512413c-f3f8-4941-a878-9b6071acb540 00:08:58.089 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.089 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:58.089 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.089 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.089 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:58.089 05:33:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6512413c-f3f8-4941-a878-9b6071acb540 -t 2000 00:08:58.348 [ 00:08:58.348 { 00:08:58.348 "name": "6512413c-f3f8-4941-a878-9b6071acb540", 00:08:58.348 "aliases": [ 00:08:58.348 "lvs/lvol" 00:08:58.348 ], 00:08:58.348 "product_name": "Logical Volume", 00:08:58.348 "block_size": 4096, 00:08:58.348 "num_blocks": 38912, 00:08:58.348 "uuid": "6512413c-f3f8-4941-a878-9b6071acb540", 00:08:58.348 "assigned_rate_limits": { 00:08:58.348 "rw_ios_per_sec": 0, 00:08:58.348 "rw_mbytes_per_sec": 0, 00:08:58.348 "r_mbytes_per_sec": 0, 00:08:58.348 "w_mbytes_per_sec": 0 00:08:58.348 }, 00:08:58.348 "claimed": false, 00:08:58.348 "zoned": false, 00:08:58.348 "supported_io_types": { 00:08:58.348 "read": true, 00:08:58.348 "write": true, 00:08:58.348 "unmap": true, 00:08:58.348 "flush": false, 00:08:58.348 "reset": true, 00:08:58.348 
"nvme_admin": false, 00:08:58.348 "nvme_io": false, 00:08:58.348 "nvme_io_md": false, 00:08:58.348 "write_zeroes": true, 00:08:58.348 "zcopy": false, 00:08:58.348 "get_zone_info": false, 00:08:58.348 "zone_management": false, 00:08:58.348 "zone_append": false, 00:08:58.348 "compare": false, 00:08:58.348 "compare_and_write": false, 00:08:58.348 "abort": false, 00:08:58.348 "seek_hole": true, 00:08:58.348 "seek_data": true, 00:08:58.348 "copy": false, 00:08:58.348 "nvme_iov_md": false 00:08:58.348 }, 00:08:58.348 "driver_specific": { 00:08:58.348 "lvol": { 00:08:58.348 "lvol_store_uuid": "99c020c3-dfe2-4f55-81d5-ff92f339e25e", 00:08:58.348 "base_bdev": "aio_bdev", 00:08:58.348 "thin_provision": false, 00:08:58.348 "num_allocated_clusters": 38, 00:08:58.348 "snapshot": false, 00:08:58.348 "clone": false, 00:08:58.348 "esnap_clone": false 00:08:58.348 } 00:08:58.348 } 00:08:58.348 } 00:08:58.348 ] 00:08:58.348 05:33:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:58.348 05:33:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:58.348 05:33:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:58.607 05:33:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:58.607 05:33:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:58.607 05:33:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:58.607 05:33:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:58.607 05:33:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6512413c-f3f8-4941-a878-9b6071acb540 00:08:58.866 05:33:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99c020c3-dfe2-4f55-81d5-ff92f339e25e 00:08:59.125 05:33:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.384 00:08:59.384 real 0m15.492s 00:08:59.384 user 0m15.078s 00:08:59.384 sys 0m1.463s 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:59.384 ************************************ 00:08:59.384 END TEST lvs_grow_clean 00:08:59.384 ************************************ 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.384 ************************************ 
00:08:59.384 START TEST lvs_grow_dirty 00:08:59.384 ************************************ 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:59.384 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.385 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.385 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.644 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:59.644 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:59.903 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f7b739b9-5710-4196-b3e7-74fe3581fd20 00:08:59.903 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:08:59.903 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:59.903 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:59.903 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:59.903 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f7b739b9-5710-4196-b3e7-74fe3581fd20 lvol 150 00:09:00.162 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=99346ba3-ccf3-4ab9-b273-b233e54f9d7d 00:09:00.162 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:00.162 05:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:00.420 [2024-12-10 05:33:48.125038] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:00.420 [2024-12-10 05:33:48.125090] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:00.420 true 00:09:00.421 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:00.421 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:00.680 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:00.680 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.680 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 99346ba3-ccf3-4ab9-b273-b233e54f9d7d 00:09:00.940 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:01.199 [2024-12-10 05:33:48.847179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.199 05:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:01.199 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1050053 00:09:01.199 05:33:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:01.199 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:01.199 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1050053 /var/tmp/bdevperf.sock 00:09:01.199 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1050053 ']' 00:09:01.199 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:01.199 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.199 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:01.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:01.199 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.199 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.199 [2024-12-10 05:33:49.083747] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:09:01.199 [2024-12-10 05:33:49.083795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050053 ] 00:09:01.458 [2024-12-10 05:33:49.158883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.458 [2024-12-10 05:33:49.199443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.458 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.458 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:01.458 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:02.025 Nvme0n1 00:09:02.026 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:02.026 [ 00:09:02.026 { 00:09:02.026 "name": "Nvme0n1", 00:09:02.026 "aliases": [ 00:09:02.026 "99346ba3-ccf3-4ab9-b273-b233e54f9d7d" 00:09:02.026 ], 00:09:02.026 "product_name": "NVMe disk", 00:09:02.026 "block_size": 4096, 00:09:02.026 "num_blocks": 38912, 00:09:02.026 "uuid": "99346ba3-ccf3-4ab9-b273-b233e54f9d7d", 00:09:02.026 "numa_id": 1, 00:09:02.026 "assigned_rate_limits": { 00:09:02.026 "rw_ios_per_sec": 0, 00:09:02.026 "rw_mbytes_per_sec": 0, 00:09:02.026 "r_mbytes_per_sec": 0, 00:09:02.026 "w_mbytes_per_sec": 0 00:09:02.026 }, 00:09:02.026 "claimed": false, 00:09:02.026 "zoned": false, 00:09:02.026 "supported_io_types": { 00:09:02.026 "read": true, 
00:09:02.026 "write": true, 00:09:02.026 "unmap": true, 00:09:02.026 "flush": true, 00:09:02.026 "reset": true, 00:09:02.026 "nvme_admin": true, 00:09:02.026 "nvme_io": true, 00:09:02.026 "nvme_io_md": false, 00:09:02.026 "write_zeroes": true, 00:09:02.026 "zcopy": false, 00:09:02.026 "get_zone_info": false, 00:09:02.026 "zone_management": false, 00:09:02.026 "zone_append": false, 00:09:02.026 "compare": true, 00:09:02.026 "compare_and_write": true, 00:09:02.026 "abort": true, 00:09:02.026 "seek_hole": false, 00:09:02.026 "seek_data": false, 00:09:02.026 "copy": true, 00:09:02.026 "nvme_iov_md": false 00:09:02.026 }, 00:09:02.026 "memory_domains": [ 00:09:02.026 { 00:09:02.026 "dma_device_id": "system", 00:09:02.026 "dma_device_type": 1 00:09:02.026 } 00:09:02.026 ], 00:09:02.026 "driver_specific": { 00:09:02.026 "nvme": [ 00:09:02.026 { 00:09:02.026 "trid": { 00:09:02.026 "trtype": "TCP", 00:09:02.026 "adrfam": "IPv4", 00:09:02.026 "traddr": "10.0.0.2", 00:09:02.026 "trsvcid": "4420", 00:09:02.026 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:02.026 }, 00:09:02.026 "ctrlr_data": { 00:09:02.026 "cntlid": 1, 00:09:02.026 "vendor_id": "0x8086", 00:09:02.026 "model_number": "SPDK bdev Controller", 00:09:02.026 "serial_number": "SPDK0", 00:09:02.026 "firmware_revision": "25.01", 00:09:02.026 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:02.026 "oacs": { 00:09:02.026 "security": 0, 00:09:02.026 "format": 0, 00:09:02.026 "firmware": 0, 00:09:02.026 "ns_manage": 0 00:09:02.026 }, 00:09:02.026 "multi_ctrlr": true, 00:09:02.026 "ana_reporting": false 00:09:02.026 }, 00:09:02.026 "vs": { 00:09:02.026 "nvme_version": "1.3" 00:09:02.026 }, 00:09:02.026 "ns_data": { 00:09:02.026 "id": 1, 00:09:02.026 "can_share": true 00:09:02.026 } 00:09:02.026 } 00:09:02.026 ], 00:09:02.026 "mp_policy": "active_passive" 00:09:02.026 } 00:09:02.026 } 00:09:02.026 ] 00:09:02.026 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1050277 00:09:02.026 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:02.026 05:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:02.285 Running I/O for 10 seconds... 00:09:03.223 Latency(us) 00:09:03.223 [2024-12-10T04:33:51.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.223 Nvme0n1 : 1.00 23054.00 90.05 0.00 0.00 0.00 0.00 0.00 00:09:03.223 [2024-12-10T04:33:51.119Z] =================================================================================================================== 00:09:03.223 [2024-12-10T04:33:51.119Z] Total : 23054.00 90.05 0.00 0.00 0.00 0.00 0.00 00:09:03.223 00:09:04.159 05:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:04.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.159 Nvme0n1 : 2.00 23397.00 91.39 0.00 0.00 0.00 0.00 0.00 00:09:04.159 [2024-12-10T04:33:52.055Z] =================================================================================================================== 00:09:04.159 [2024-12-10T04:33:52.055Z] Total : 23397.00 91.39 0.00 0.00 0.00 0.00 0.00 00:09:04.159 00:09:04.159 true 00:09:04.418 05:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:04.418 05:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:04.418 05:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:04.418 05:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:04.418 05:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1050277 00:09:05.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.357 Nvme0n1 : 3.00 23506.67 91.82 0.00 0.00 0.00 0.00 0.00 00:09:05.357 [2024-12-10T04:33:53.253Z] =================================================================================================================== 00:09:05.357 [2024-12-10T04:33:53.253Z] Total : 23506.67 91.82 0.00 0.00 0.00 0.00 0.00 00:09:05.357 00:09:06.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.293 Nvme0n1 : 4.00 23619.00 92.26 0.00 0.00 0.00 0.00 0.00 00:09:06.293 [2024-12-10T04:33:54.189Z] =================================================================================================================== 00:09:06.293 [2024-12-10T04:33:54.189Z] Total : 23619.00 92.26 0.00 0.00 0.00 0.00 0.00 00:09:06.293 00:09:07.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.230 Nvme0n1 : 5.00 23682.80 92.51 0.00 0.00 0.00 0.00 0.00 00:09:07.230 [2024-12-10T04:33:55.126Z] =================================================================================================================== 00:09:07.230 [2024-12-10T04:33:55.126Z] Total : 23682.80 92.51 0.00 0.00 0.00 0.00 0.00 00:09:07.230 00:09:08.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.166 Nvme0n1 : 6.00 23739.00 92.73 0.00 0.00 0.00 0.00 0.00 00:09:08.166 [2024-12-10T04:33:56.062Z] =================================================================================================================== 00:09:08.166 
[2024-12-10T04:33:56.062Z] Total : 23739.00 92.73 0.00 0.00 0.00 0.00 0.00 00:09:08.166 00:09:09.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.102 Nvme0n1 : 7.00 23782.71 92.90 0.00 0.00 0.00 0.00 0.00 00:09:09.102 [2024-12-10T04:33:56.998Z] =================================================================================================================== 00:09:09.102 [2024-12-10T04:33:56.998Z] Total : 23782.71 92.90 0.00 0.00 0.00 0.00 0.00 00:09:09.102 00:09:10.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.480 Nvme0n1 : 8.00 23804.25 92.99 0.00 0.00 0.00 0.00 0.00 00:09:10.480 [2024-12-10T04:33:58.376Z] =================================================================================================================== 00:09:10.480 [2024-12-10T04:33:58.376Z] Total : 23804.25 92.99 0.00 0.00 0.00 0.00 0.00 00:09:10.480 00:09:11.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.420 Nvme0n1 : 9.00 23833.89 93.10 0.00 0.00 0.00 0.00 0.00 00:09:11.420 [2024-12-10T04:33:59.316Z] =================================================================================================================== 00:09:11.420 [2024-12-10T04:33:59.316Z] Total : 23833.89 93.10 0.00 0.00 0.00 0.00 0.00 00:09:11.420 00:09:12.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.357 Nvme0n1 : 10.00 23857.80 93.19 0.00 0.00 0.00 0.00 0.00 00:09:12.357 [2024-12-10T04:34:00.253Z] =================================================================================================================== 00:09:12.357 [2024-12-10T04:34:00.253Z] Total : 23857.80 93.19 0.00 0.00 0.00 0.00 0.00 00:09:12.357 00:09:12.357 00:09:12.357 Latency(us) 00:09:12.357 [2024-12-10T04:34:00.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:12.357 Nvme0n1 : 10.00 23858.74 93.20 0.00 0.00 5361.98 3183.18 12170.97 00:09:12.357 [2024-12-10T04:34:00.253Z] =================================================================================================================== 00:09:12.357 [2024-12-10T04:34:00.253Z] Total : 23858.74 93.20 0.00 0.00 5361.98 3183.18 12170.97 00:09:12.357 { 00:09:12.357 "results": [ 00:09:12.357 { 00:09:12.357 "job": "Nvme0n1", 00:09:12.357 "core_mask": "0x2", 00:09:12.357 "workload": "randwrite", 00:09:12.357 "status": "finished", 00:09:12.357 "queue_depth": 128, 00:09:12.357 "io_size": 4096, 00:09:12.357 "runtime": 10.004971, 00:09:12.357 "iops": 23858.73982043526, 00:09:12.357 "mibps": 93.19820242357524, 00:09:12.357 "io_failed": 0, 00:09:12.357 "io_timeout": 0, 00:09:12.357 "avg_latency_us": 5361.98218518656, 00:09:12.357 "min_latency_us": 3183.177142857143, 00:09:12.357 "max_latency_us": 12170.971428571429 00:09:12.357 } 00:09:12.357 ], 00:09:12.357 "core_count": 1 00:09:12.357 } 00:09:12.357 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1050053 00:09:12.357 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1050053 ']' 00:09:12.357 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1050053 00:09:12.357 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:12.357 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.357 05:33:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1050053 00:09:12.357 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:12.357 05:34:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:12.357 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1050053' 00:09:12.357 killing process with pid 1050053 00:09:12.357 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1050053 00:09:12.357 Received shutdown signal, test time was about 10.000000 seconds 00:09:12.357 00:09:12.357 Latency(us) 00:09:12.357 [2024-12-10T04:34:00.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.357 [2024-12-10T04:34:00.253Z] =================================================================================================================== 00:09:12.357 [2024-12-10T04:34:00.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:12.357 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1050053 00:09:12.357 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.616 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:12.875 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:12.875 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1047024 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1047024 00:09:13.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1047024 Killed "${NVMF_APP[@]}" "$@" 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1052082 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1052082 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1052082 ']' 00:09:13.134 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.135 05:34:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.135 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.135 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.135 05:34:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:13.135 [2024-12-10 05:34:00.943741] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:09:13.135 [2024-12-10 05:34:00.943787] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.135 [2024-12-10 05:34:01.020975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.394 [2024-12-10 05:34:01.059785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.394 [2024-12-10 05:34:01.059818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.394 [2024-12-10 05:34:01.059824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.394 [2024-12-10 05:34:01.059830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.394 [2024-12-10 05:34:01.059835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:13.394 [2024-12-10 05:34:01.060323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.394 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.394 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:13.394 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.394 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.394 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:13.394 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.394 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:13.652 [2024-12-10 05:34:01.353090] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:13.652 [2024-12-10 05:34:01.353181] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:13.652 [2024-12-10 05:34:01.353207] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:13.652 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:13.652 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 99346ba3-ccf3-4ab9-b273-b233e54f9d7d 00:09:13.652 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=99346ba3-ccf3-4ab9-b273-b233e54f9d7d 
00:09:13.652 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.652 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:13.652 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.652 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.652 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:13.911 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 99346ba3-ccf3-4ab9-b273-b233e54f9d7d -t 2000 00:09:13.911 [ 00:09:13.911 { 00:09:13.911 "name": "99346ba3-ccf3-4ab9-b273-b233e54f9d7d", 00:09:13.911 "aliases": [ 00:09:13.911 "lvs/lvol" 00:09:13.911 ], 00:09:13.911 "product_name": "Logical Volume", 00:09:13.911 "block_size": 4096, 00:09:13.911 "num_blocks": 38912, 00:09:13.911 "uuid": "99346ba3-ccf3-4ab9-b273-b233e54f9d7d", 00:09:13.911 "assigned_rate_limits": { 00:09:13.911 "rw_ios_per_sec": 0, 00:09:13.911 "rw_mbytes_per_sec": 0, 00:09:13.911 "r_mbytes_per_sec": 0, 00:09:13.911 "w_mbytes_per_sec": 0 00:09:13.911 }, 00:09:13.911 "claimed": false, 00:09:13.911 "zoned": false, 00:09:13.911 "supported_io_types": { 00:09:13.911 "read": true, 00:09:13.911 "write": true, 00:09:13.911 "unmap": true, 00:09:13.911 "flush": false, 00:09:13.911 "reset": true, 00:09:13.911 "nvme_admin": false, 00:09:13.911 "nvme_io": false, 00:09:13.911 "nvme_io_md": false, 00:09:13.911 "write_zeroes": true, 00:09:13.911 "zcopy": false, 00:09:13.911 "get_zone_info": false, 00:09:13.911 "zone_management": false, 00:09:13.911 "zone_append": 
false, 00:09:13.911 "compare": false, 00:09:13.911 "compare_and_write": false, 00:09:13.911 "abort": false, 00:09:13.912 "seek_hole": true, 00:09:13.912 "seek_data": true, 00:09:13.912 "copy": false, 00:09:13.912 "nvme_iov_md": false 00:09:13.912 }, 00:09:13.912 "driver_specific": { 00:09:13.912 "lvol": { 00:09:13.912 "lvol_store_uuid": "f7b739b9-5710-4196-b3e7-74fe3581fd20", 00:09:13.912 "base_bdev": "aio_bdev", 00:09:13.912 "thin_provision": false, 00:09:13.912 "num_allocated_clusters": 38, 00:09:13.912 "snapshot": false, 00:09:13.912 "clone": false, 00:09:13.912 "esnap_clone": false 00:09:13.912 } 00:09:13.912 } 00:09:13.912 } 00:09:13.912 ] 00:09:13.912 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:13.912 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:13.912 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:14.171 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:14.171 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:14.171 05:34:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:14.430 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:14.430 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:14.430 [2024-12-10 05:34:02.298069] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.688 05:34:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:14.688 request: 00:09:14.688 { 00:09:14.688 "uuid": "f7b739b9-5710-4196-b3e7-74fe3581fd20", 00:09:14.688 "method": "bdev_lvol_get_lvstores", 00:09:14.688 "req_id": 1 00:09:14.688 } 00:09:14.688 Got JSON-RPC error response 00:09:14.688 response: 00:09:14.688 { 00:09:14.688 "code": -19, 00:09:14.688 "message": "No such device" 00:09:14.688 } 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.688 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.947 aio_bdev 00:09:14.947 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 99346ba3-ccf3-4ab9-b273-b233e54f9d7d 00:09:14.947 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=99346ba3-ccf3-4ab9-b273-b233e54f9d7d 00:09:14.947 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.947 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:14.947 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.947 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.947 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:15.206 05:34:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 99346ba3-ccf3-4ab9-b273-b233e54f9d7d -t 2000 00:09:15.206 [ 00:09:15.206 { 00:09:15.206 "name": "99346ba3-ccf3-4ab9-b273-b233e54f9d7d", 00:09:15.206 "aliases": [ 00:09:15.206 "lvs/lvol" 00:09:15.206 ], 00:09:15.206 "product_name": "Logical Volume", 00:09:15.206 "block_size": 4096, 00:09:15.206 "num_blocks": 38912, 00:09:15.206 "uuid": "99346ba3-ccf3-4ab9-b273-b233e54f9d7d", 00:09:15.206 "assigned_rate_limits": { 00:09:15.206 "rw_ios_per_sec": 0, 00:09:15.206 "rw_mbytes_per_sec": 0, 00:09:15.206 "r_mbytes_per_sec": 0, 00:09:15.206 "w_mbytes_per_sec": 0 00:09:15.206 }, 00:09:15.206 "claimed": false, 00:09:15.206 "zoned": false, 00:09:15.206 "supported_io_types": { 00:09:15.206 "read": true, 00:09:15.206 "write": true, 00:09:15.206 "unmap": true, 00:09:15.206 "flush": false, 00:09:15.206 "reset": true, 00:09:15.206 "nvme_admin": false, 00:09:15.206 "nvme_io": false, 00:09:15.206 "nvme_io_md": false, 00:09:15.206 "write_zeroes": true, 00:09:15.206 "zcopy": false, 00:09:15.206 "get_zone_info": false, 00:09:15.206 "zone_management": false, 00:09:15.206 "zone_append": false, 00:09:15.206 "compare": false, 00:09:15.206 "compare_and_write": false, 
00:09:15.206 "abort": false, 00:09:15.207 "seek_hole": true, 00:09:15.207 "seek_data": true, 00:09:15.207 "copy": false, 00:09:15.207 "nvme_iov_md": false 00:09:15.207 }, 00:09:15.207 "driver_specific": { 00:09:15.207 "lvol": { 00:09:15.207 "lvol_store_uuid": "f7b739b9-5710-4196-b3e7-74fe3581fd20", 00:09:15.207 "base_bdev": "aio_bdev", 00:09:15.207 "thin_provision": false, 00:09:15.207 "num_allocated_clusters": 38, 00:09:15.207 "snapshot": false, 00:09:15.207 "clone": false, 00:09:15.207 "esnap_clone": false 00:09:15.207 } 00:09:15.207 } 00:09:15.207 } 00:09:15.207 ] 00:09:15.207 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:15.207 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:15.207 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:15.465 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:15.465 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:15.465 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:15.724 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:15.724 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 99346ba3-ccf3-4ab9-b273-b233e54f9d7d 00:09:15.724 05:34:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f7b739b9-5710-4196-b3e7-74fe3581fd20 00:09:15.984 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:16.242 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:16.242 00:09:16.242 real 0m16.813s 00:09:16.242 user 0m43.643s 00:09:16.242 sys 0m3.600s 00:09:16.242 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.242 05:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:16.242 ************************************ 00:09:16.242 END TEST lvs_grow_dirty 00:09:16.242 ************************************ 00:09:16.242 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:16.243 nvmf_trace.0 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:16.243 rmmod nvme_tcp 00:09:16.243 rmmod nvme_fabrics 00:09:16.243 rmmod nvme_keyring 00:09:16.243 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1052082 ']' 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1052082 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1052082 ']' 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1052082 
00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1052082 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1052082' 00:09:16.502 killing process with pid 1052082 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1052082 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1052082 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.502 05:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.038 00:09:19.038 real 0m41.534s 00:09:19.038 user 1m4.289s 00:09:19.038 sys 0m9.935s 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:19.038 ************************************ 00:09:19.038 END TEST nvmf_lvs_grow 00:09:19.038 ************************************ 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.038 ************************************ 00:09:19.038 START TEST nvmf_bdev_io_wait 00:09:19.038 ************************************ 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:19.038 * Looking for test storage... 
00:09:19.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.038 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:19.039 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.039 --rc genhtml_branch_coverage=1 00:09:19.039 --rc genhtml_function_coverage=1 00:09:19.039 --rc genhtml_legend=1 00:09:19.039 --rc geninfo_all_blocks=1 00:09:19.039 --rc geninfo_unexecuted_blocks=1 00:09:19.039 00:09:19.039 ' 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.039 --rc genhtml_branch_coverage=1 00:09:19.039 --rc genhtml_function_coverage=1 00:09:19.039 --rc genhtml_legend=1 00:09:19.039 --rc geninfo_all_blocks=1 00:09:19.039 --rc geninfo_unexecuted_blocks=1 00:09:19.039 00:09:19.039 ' 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.039 --rc genhtml_branch_coverage=1 00:09:19.039 --rc genhtml_function_coverage=1 00:09:19.039 --rc genhtml_legend=1 00:09:19.039 --rc geninfo_all_blocks=1 00:09:19.039 --rc geninfo_unexecuted_blocks=1 00:09:19.039 00:09:19.039 ' 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.039 --rc genhtml_branch_coverage=1 00:09:19.039 --rc genhtml_function_coverage=1 00:09:19.039 --rc genhtml_legend=1 00:09:19.039 --rc geninfo_all_blocks=1 00:09:19.039 --rc geninfo_unexecuted_blocks=1 00:09:19.039 00:09:19.039 ' 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.039 05:34:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.039 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.040 05:34:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.611 05:34:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:25.611 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:25.611 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.611 05:34:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:25.611 Found net devices under 0000:af:00.0: cvl_0_0 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.611 
05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:25.611 Found net devices under 0000:af:00.1: cvl_0_1 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.611 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.611 05:34:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:09:25.612 00:09:25.612 --- 10.0.0.2 ping statistics --- 00:09:25.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.612 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:25.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:09:25.612 00:09:25.612 --- 10.0.0.1 ping statistics --- 00:09:25.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.612 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1056073 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1056073 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1056073 ']' 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.612 [2024-12-10 05:34:12.683045] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:09:25.612 [2024-12-10 05:34:12.683087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.612 [2024-12-10 05:34:12.763036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.612 [2024-12-10 05:34:12.805312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.612 [2024-12-10 05:34:12.805350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:25.612 [2024-12-10 05:34:12.805357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.612 [2024-12-10 05:34:12.805364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.612 [2024-12-10 05:34:12.805369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.612 [2024-12-10 05:34:12.806838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.612 [2024-12-10 05:34:12.806947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.612 [2024-12-10 05:34:12.807057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.612 [2024-12-10 05:34:12.807057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.612 05:34:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.612 [2024-12-10 05:34:12.955043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.612 Malloc0 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.612 
05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.612 05:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.612 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.612 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.612 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.612 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:25.612 [2024-12-10 05:34:13.010329] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.612 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.612 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1056303 00:09:25.612 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:25.612 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:25.612 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1056305 
00:09:25.612 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:25.613 { 00:09:25.613 "params": { 00:09:25.613 "name": "Nvme$subsystem", 00:09:25.613 "trtype": "$TEST_TRANSPORT", 00:09:25.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:25.613 "adrfam": "ipv4", 00:09:25.613 "trsvcid": "$NVMF_PORT", 00:09:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:25.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:25.613 "hdgst": ${hdgst:-false}, 00:09:25.613 "ddgst": ${ddgst:-false} 00:09:25.613 }, 00:09:25.613 "method": "bdev_nvme_attach_controller" 00:09:25.613 } 00:09:25.613 EOF 00:09:25.613 )") 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1056307 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:25.613 { 00:09:25.613 "params": { 00:09:25.613 
"name": "Nvme$subsystem", 00:09:25.613 "trtype": "$TEST_TRANSPORT", 00:09:25.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:25.613 "adrfam": "ipv4", 00:09:25.613 "trsvcid": "$NVMF_PORT", 00:09:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:25.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:25.613 "hdgst": ${hdgst:-false}, 00:09:25.613 "ddgst": ${ddgst:-false} 00:09:25.613 }, 00:09:25.613 "method": "bdev_nvme_attach_controller" 00:09:25.613 } 00:09:25.613 EOF 00:09:25.613 )") 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1056310 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:09:25.613 { 00:09:25.613 "params": { 00:09:25.613 "name": "Nvme$subsystem", 00:09:25.613 "trtype": "$TEST_TRANSPORT", 00:09:25.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:25.613 "adrfam": "ipv4", 00:09:25.613 "trsvcid": "$NVMF_PORT", 00:09:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:25.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:25.613 "hdgst": ${hdgst:-false}, 00:09:25.613 "ddgst": ${ddgst:-false} 00:09:25.613 }, 00:09:25.613 "method": "bdev_nvme_attach_controller" 00:09:25.613 } 00:09:25.613 EOF 00:09:25.613 )") 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:25.613 { 00:09:25.613 "params": { 00:09:25.613 "name": "Nvme$subsystem", 00:09:25.613 "trtype": "$TEST_TRANSPORT", 00:09:25.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:25.613 "adrfam": "ipv4", 00:09:25.613 "trsvcid": "$NVMF_PORT", 00:09:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:25.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:25.613 "hdgst": ${hdgst:-false}, 00:09:25.613 "ddgst": ${ddgst:-false} 00:09:25.613 }, 00:09:25.613 "method": "bdev_nvme_attach_controller" 00:09:25.613 } 00:09:25.613 EOF 00:09:25.613 )") 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1056303 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:25.613 
05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:25.613 "params": { 00:09:25.613 "name": "Nvme1", 00:09:25.613 "trtype": "tcp", 00:09:25.613 "traddr": "10.0.0.2", 00:09:25.613 "adrfam": "ipv4", 00:09:25.613 "trsvcid": "4420", 00:09:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:25.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:25.613 "hdgst": false, 00:09:25.613 "ddgst": false 00:09:25.613 }, 00:09:25.613 "method": "bdev_nvme_attach_controller" 00:09:25.613 }' 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:25.613 "params": { 00:09:25.613 "name": "Nvme1", 00:09:25.613 "trtype": "tcp", 00:09:25.613 "traddr": "10.0.0.2", 00:09:25.613 "adrfam": "ipv4", 00:09:25.613 "trsvcid": "4420", 00:09:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:25.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:25.613 "hdgst": false, 00:09:25.613 "ddgst": false 00:09:25.613 }, 00:09:25.613 "method": "bdev_nvme_attach_controller" 00:09:25.613 }' 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:25.613 "params": { 00:09:25.613 "name": "Nvme1", 00:09:25.613 "trtype": "tcp", 00:09:25.613 "traddr": "10.0.0.2", 00:09:25.613 "adrfam": "ipv4", 00:09:25.613 "trsvcid": "4420", 00:09:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:25.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:25.613 "hdgst": false, 00:09:25.613 "ddgst": false 00:09:25.613 }, 00:09:25.613 "method": "bdev_nvme_attach_controller" 00:09:25.613 }' 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:25.613 05:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:25.613 "params": { 00:09:25.613 "name": "Nvme1", 00:09:25.613 "trtype": "tcp", 00:09:25.613 "traddr": "10.0.0.2", 00:09:25.613 "adrfam": "ipv4", 00:09:25.613 "trsvcid": "4420", 00:09:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:25.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:25.613 "hdgst": false, 00:09:25.613 "ddgst": false 00:09:25.613 }, 00:09:25.613 "method": "bdev_nvme_attach_controller" 00:09:25.613 }' 00:09:25.613 [2024-12-10 05:34:13.062951] Starting SPDK v25.01-pre git sha1 
0edc184ec / DPDK 24.03.0 initialization... 00:09:25.613 [2024-12-10 05:34:13.062998] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:25.613 [2024-12-10 05:34:13.064612] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:09:25.613 [2024-12-10 05:34:13.064661] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:25.613 [2024-12-10 05:34:13.064656] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:09:25.613 [2024-12-10 05:34:13.064699] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:25.613 [2024-12-10 05:34:13.064974] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:09:25.613 [2024-12-10 05:34:13.065010] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:25.613 [2024-12-10 05:34:13.248829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.613 [2024-12-10 05:34:13.293636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:25.613 [2024-12-10 05:34:13.348875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.613 [2024-12-10 05:34:13.394044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:25.613 [2024-12-10 05:34:13.446244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.871 [2024-12-10 05:34:13.501169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.872 [2024-12-10 05:34:13.505590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:25.872 [2024-12-10 05:34:13.542994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:25.872 Running I/O for 1 seconds... 00:09:25.872 Running I/O for 1 seconds... 00:09:26.130 Running I/O for 1 seconds... 00:09:26.130 Running I/O for 1 seconds... 
00:09:27.067 11761.00 IOPS, 45.94 MiB/s 00:09:27.067 Latency(us) 00:09:27.067 [2024-12-10T04:34:14.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.067 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:27.067 Nvme1n1 : 1.01 11806.04 46.12 0.00 0.00 10801.06 6303.94 13668.94 00:09:27.067 [2024-12-10T04:34:14.963Z] =================================================================================================================== 00:09:27.067 [2024-12-10T04:34:14.963Z] Total : 11806.04 46.12 0.00 0.00 10801.06 6303.94 13668.94 00:09:27.067 9660.00 IOPS, 37.73 MiB/s 00:09:27.067 Latency(us) 00:09:27.067 [2024-12-10T04:34:14.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.067 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:27.067 Nvme1n1 : 1.01 9727.63 38.00 0.00 0.00 13111.05 5430.13 20846.69 00:09:27.067 [2024-12-10T04:34:14.963Z] =================================================================================================================== 00:09:27.067 [2024-12-10T04:34:14.963Z] Total : 9727.63 38.00 0.00 0.00 13111.05 5430.13 20846.69 00:09:27.067 11543.00 IOPS, 45.09 MiB/s 00:09:27.067 Latency(us) 00:09:27.067 [2024-12-10T04:34:14.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.067 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:27.067 Nvme1n1 : 1.00 11626.40 45.42 0.00 0.00 10983.65 3042.74 22219.82 00:09:27.067 [2024-12-10T04:34:14.963Z] =================================================================================================================== 00:09:27.067 [2024-12-10T04:34:14.963Z] Total : 11626.40 45.42 0.00 0.00 10983.65 3042.74 22219.82 00:09:27.067 242648.00 IOPS, 947.84 MiB/s 00:09:27.068 Latency(us) 00:09:27.068 [2024-12-10T04:34:14.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.068 Job: Nvme1n1 (Core 
Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:27.068 Nvme1n1 : 1.00 242283.85 946.42 0.00 0.00 525.68 219.43 1482.36 00:09:27.068 [2024-12-10T04:34:14.964Z] =================================================================================================================== 00:09:27.068 [2024-12-10T04:34:14.964Z] Total : 242283.85 946.42 0.00 0.00 525.68 219.43 1482.36 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1056305 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1056307 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1056310 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:09:27.068 05:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.068 rmmod nvme_tcp 00:09:27.327 rmmod nvme_fabrics 00:09:27.327 rmmod nvme_keyring 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1056073 ']' 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1056073 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1056073 ']' 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1056073 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1056073 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1056073' 00:09:27.327 killing process with pid 1056073 00:09:27.327 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1056073 00:09:27.327 05:34:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1056073 00:09:27.585 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.585 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.585 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.585 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:27.585 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:27.585 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.585 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.585 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.585 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.585 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.585 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.586 05:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.491 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.491 00:09:29.491 real 0m10.800s 00:09:29.491 user 0m16.672s 00:09:29.491 sys 0m6.148s 00:09:29.491 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.491 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:29.491 ************************************ 
00:09:29.491 END TEST nvmf_bdev_io_wait 00:09:29.491 ************************************ 00:09:29.491 05:34:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:29.491 05:34:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.492 05:34:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.492 05:34:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.492 ************************************ 00:09:29.492 START TEST nvmf_queue_depth 00:09:29.492 ************************************ 00:09:29.492 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:29.752 * Looking for test storage... 00:09:29.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.752 --rc genhtml_branch_coverage=1 00:09:29.752 --rc genhtml_function_coverage=1 00:09:29.752 --rc genhtml_legend=1 00:09:29.752 --rc geninfo_all_blocks=1 00:09:29.752 --rc 
geninfo_unexecuted_blocks=1 00:09:29.752 00:09:29.752 ' 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.752 --rc genhtml_branch_coverage=1 00:09:29.752 --rc genhtml_function_coverage=1 00:09:29.752 --rc genhtml_legend=1 00:09:29.752 --rc geninfo_all_blocks=1 00:09:29.752 --rc geninfo_unexecuted_blocks=1 00:09:29.752 00:09:29.752 ' 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.752 --rc genhtml_branch_coverage=1 00:09:29.752 --rc genhtml_function_coverage=1 00:09:29.752 --rc genhtml_legend=1 00:09:29.752 --rc geninfo_all_blocks=1 00:09:29.752 --rc geninfo_unexecuted_blocks=1 00:09:29.752 00:09:29.752 ' 00:09:29.752 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.752 --rc genhtml_branch_coverage=1 00:09:29.752 --rc genhtml_function_coverage=1 00:09:29.753 --rc genhtml_legend=1 00:09:29.753 --rc geninfo_all_blocks=1 00:09:29.753 --rc geninfo_unexecuted_blocks=1 00:09:29.753 00:09:29.753 ' 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.753 05:34:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.753 05:34:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.753 05:34:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.753 05:34:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:36.324 05:34:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:36.324 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.324 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:36.325 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:36.325 Found net devices under 0000:af:00.0: cvl_0_0 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:36.325 Found net devices under 0000:af:00.1: cvl_0_1 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.325 
05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:36.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:09:36.325 00:09:36.325 --- 10.0.0.2 ping statistics --- 00:09:36.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.325 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:36.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:09:36.325 00:09:36.325 --- 10.0.0.1 ping statistics --- 00:09:36.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.325 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1060040 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1060040 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1060040 ']' 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.325 [2024-12-10 05:34:23.629227] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:09:36.325 [2024-12-10 05:34:23.629280] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.325 [2024-12-10 05:34:23.709047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.325 [2024-12-10 05:34:23.748781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.325 [2024-12-10 05:34:23.748814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:36.325 [2024-12-10 05:34:23.748822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.325 [2024-12-10 05:34:23.748827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.325 [2024-12-10 05:34:23.748832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.325 [2024-12-10 05:34:23.749335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:36.325 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.326 [2024-12-10 05:34:23.881438] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.326 Malloc0 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.326 [2024-12-10 05:34:23.931615] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.326 05:34:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1060232 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1060232 /var/tmp/bdevperf.sock 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1060232 ']' 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:36.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.326 05:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.326 [2024-12-10 05:34:23.983229] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:09:36.326 [2024-12-10 05:34:23.983271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060232 ] 00:09:36.326 [2024-12-10 05:34:24.058459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.326 [2024-12-10 05:34:24.097593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.326 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.326 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:36.326 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:36.326 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.326 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.585 NVMe0n1 00:09:36.585 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.585 05:34:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:36.844 Running I/O for 10 seconds... 
00:09:38.724 11742.00 IOPS, 45.87 MiB/s
[2024-12-10T04:34:27.581Z] 12117.00 IOPS, 47.33 MiB/s
[2024-12-10T04:34:28.634Z] 12281.67 IOPS, 47.98 MiB/s
[2024-12-10T04:34:29.571Z] 12292.75 IOPS, 48.02 MiB/s
[2024-12-10T04:34:30.947Z] 12366.00 IOPS, 48.30 MiB/s
[2024-12-10T04:34:31.883Z] 12412.50 IOPS, 48.49 MiB/s
[2024-12-10T04:34:32.820Z] 12419.14 IOPS, 48.51 MiB/s
[2024-12-10T04:34:33.757Z] 12478.75 IOPS, 48.75 MiB/s
[2024-12-10T04:34:34.693Z] 12497.00 IOPS, 48.82 MiB/s
[2024-12-10T04:34:34.693Z] 12481.90 IOPS, 48.76 MiB/s
00:09:46.797 Latency(us)
00:09:46.797 [2024-12-10T04:34:34.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:46.797 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:46.797 Verification LBA range: start 0x0 length 0x4000
00:09:46.797 NVMe0n1 : 10.05 12524.44 48.92 0.00 0.00 81504.48 9237.46 53427.44
00:09:46.797 [2024-12-10T04:34:34.693Z] ===================================================================================================================
00:09:46.797 [2024-12-10T04:34:34.693Z] Total : 12524.44 48.92 0.00 0.00 81504.48 9237.46 53427.44
00:09:46.797 {
00:09:46.797 "results": [
00:09:46.797 {
00:09:46.797 "job": "NVMe0n1",
00:09:46.797 "core_mask": "0x1",
00:09:46.797 "workload": "verify",
00:09:46.797 "status": "finished",
00:09:46.797 "verify_range": {
00:09:46.797 "start": 0,
00:09:46.797 "length": 16384
00:09:46.797 },
00:09:46.797 "queue_depth": 1024,
00:09:46.797 "io_size": 4096,
00:09:46.797 "runtime": 10.047796,
00:09:46.797 "iops": 12524.438195202212,
00:09:46.797 "mibps": 48.92358670000864,
00:09:46.797 "io_failed": 0,
00:09:46.797 "io_timeout": 0,
00:09:46.797 "avg_latency_us": 81504.47614328208,
00:09:46.797 "min_latency_us": 9237.455238095237,
00:09:46.797 "max_latency_us": 53427.44380952381
00:09:46.797 }
00:09:46.797 ],
00:09:46.797 "core_count": 1
00:09:46.797 }
00:09:46.797 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1060232
00:09:46.797 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1060232 ']'
00:09:46.797 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1060232
00:09:46.797 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:09:46.797 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:46.797 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1060232
00:09:46.797 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:46.797 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:46.797 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1060232'
00:09:46.797 killing process with pid 1060232
00:09:46.797 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1060232
00:09:46.797 Received shutdown signal, test time was about 10.000000 seconds
00:09:46.797
00:09:46.797 Latency(us)
00:09:46.797 [2024-12-10T04:34:34.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:46.797 [2024-12-10T04:34:34.693Z] ===================================================================================================================
00:09:46.797 [2024-12-10T04:34:34.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:46.797 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1060232
00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- #
nvmftestfini 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.056 rmmod nvme_tcp 00:09:47.056 rmmod nvme_fabrics 00:09:47.056 rmmod nvme_keyring 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1060040 ']' 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1060040 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1060040 ']' 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1060040 00:09:47.056 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:47.057 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.057 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1060040 00:09:47.322 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:47.322 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:47.322 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1060040' 00:09:47.322 killing process with pid 1060040 00:09:47.322 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1060040 00:09:47.322 05:34:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1060040 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.322 05:34:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.860 05:34:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:49.860 00:09:49.860 real 0m19.824s 00:09:49.860 user 0m23.215s 00:09:49.860 sys 0m6.136s 00:09:49.860 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.860 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.860 ************************************ 00:09:49.860 END TEST nvmf_queue_depth 00:09:49.860 ************************************ 00:09:49.860 05:34:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:49.860 05:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:49.860 05:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.860 05:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.860 ************************************ 00:09:49.860 START TEST nvmf_target_multipath 00:09:49.860 ************************************ 00:09:49.860 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:49.860 * Looking for test storage... 
00:09:49.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.860 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:49.860 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:49.860 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:49.861 05:34:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:49.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.861 --rc genhtml_branch_coverage=1 00:09:49.861 --rc genhtml_function_coverage=1 00:09:49.861 --rc genhtml_legend=1 00:09:49.861 --rc geninfo_all_blocks=1 00:09:49.861 --rc geninfo_unexecuted_blocks=1 00:09:49.861 00:09:49.861 ' 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:49.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.861 --rc genhtml_branch_coverage=1 00:09:49.861 --rc genhtml_function_coverage=1 00:09:49.861 --rc genhtml_legend=1 00:09:49.861 --rc geninfo_all_blocks=1 00:09:49.861 --rc geninfo_unexecuted_blocks=1 00:09:49.861 00:09:49.861 ' 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:49.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.861 --rc genhtml_branch_coverage=1 00:09:49.861 --rc genhtml_function_coverage=1 00:09:49.861 --rc genhtml_legend=1 00:09:49.861 --rc geninfo_all_blocks=1 00:09:49.861 --rc geninfo_unexecuted_blocks=1 00:09:49.861 00:09:49.861 ' 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:49.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.861 --rc genhtml_branch_coverage=1 00:09:49.861 --rc genhtml_function_coverage=1 00:09:49.861 --rc genhtml_legend=1 00:09:49.861 --rc geninfo_all_blocks=1 00:09:49.861 --rc geninfo_unexecuted_blocks=1 00:09:49.861 00:09:49.861 ' 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:49.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:49.861 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:49.862 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.862 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:49.862 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:49.862 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:49.862 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.862 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.862 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.862 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:49.862 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:49.862 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:49.862 05:34:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:56.438 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:56.438 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.438 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:56.439 Found net devices under 0000:af:00.0: cvl_0_0 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.439 05:34:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:56.439 Found net devices under 0000:af:00.1: cvl_0_1 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
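The `pci_net_devs=("${pci_net_devs[@]##*/}")` step traced above turns sysfs glob results like `/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0` into bare interface names. A self-contained sketch using stand-in paths (no real sysfs needed; paths mirror the ones in the log):

```shell
# Stand-in for the glob "/sys/bus/pci/devices/$pci/net/"* on a live system.
pci_net_devs=(
    "/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0"
    "/sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1"
)

# ${var##*/} strips the longest prefix ending in '/', i.e. the basename;
# applied across the array it leaves just the interface names.
pci_net_devs=("${pci_net_devs[@]##*/}")
printf '%s\n' "${pci_net_devs[@]}"   # -> cvl_0_0, cvl_0_1
```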
00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:09:56.439 00:09:56.439 --- 10.0.0.2 ping statistics --- 00:09:56.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.439 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
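The `nvmf_tcp_init` sequence logged here moves the target NIC into a private network namespace, addresses both ends, brings the links up, opens the NVMe/TCP port, and ping-verifies both directions. A dry-run sketch that only echoes the commands, since the real sequence needs root and the actual `cvl_0_*` interfaces:

```shell
ns=cvl_0_0_ns_spdk
target_if=cvl_0_0 initiator_if=cvl_0_1

# Echo instead of execute: these commands require root and real NICs.
run() { echo "+ $*"; }

run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                     # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1 # target -> initiator
```

Putting the target side in a namespace is what lets one physical two-port NIC act as both initiator and target on the same host.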
00:09:56.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:09:56.439 00:09:56.439 --- 10.0.0.1 ping statistics --- 00:09:56.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.439 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:56.439 only one NIC for nvmf test 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:56.439 05:34:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.439 rmmod nvme_tcp 00:09:56.439 rmmod nvme_fabrics 00:09:56.439 rmmod nvme_keyring 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.439 05:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:57.821 00:09:57.821 real 0m8.318s 00:09:57.821 user 0m1.833s 00:09:57.821 sys 0m4.507s 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:57.821 ************************************ 00:09:57.821 END TEST nvmf_target_multipath 00:09:57.821 ************************************ 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.821 ************************************ 00:09:57.821 START TEST nvmf_zcopy 00:09:57.821 ************************************ 00:09:57.821 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:58.081 * Looking for test storage... 00:09:58.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
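The `lt 1.15 2` / `cmp_versions` trace that follows (used to pick lcov options) splits each version string on `.`, `-`, or `:` and compares the pieces field by field. A sketch in the same spirit, assuming purely numeric fields (the real `scripts/common.sh` helper also validates each field with `decimal`):

```shell
# Component-wise version compare: returns 0 (true) when $1 < $2.
# Missing fields are treated as 0, so 1.15 vs 2 compares as 1.15.0 vs 2.0.0.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal is not less-than
}

version_lt 1.15 2 && echo older || echo not-older   # -> older
```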
00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.081 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.082 05:34:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.082 --rc genhtml_branch_coverage=1 00:09:58.082 --rc genhtml_function_coverage=1 00:09:58.082 --rc genhtml_legend=1 00:09:58.082 --rc geninfo_all_blocks=1 00:09:58.082 --rc geninfo_unexecuted_blocks=1 00:09:58.082 00:09:58.082 ' 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.082 --rc genhtml_branch_coverage=1 00:09:58.082 --rc genhtml_function_coverage=1 00:09:58.082 --rc genhtml_legend=1 00:09:58.082 --rc geninfo_all_blocks=1 00:09:58.082 --rc geninfo_unexecuted_blocks=1 00:09:58.082 00:09:58.082 ' 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.082 --rc genhtml_branch_coverage=1 00:09:58.082 --rc genhtml_function_coverage=1 00:09:58.082 --rc genhtml_legend=1 00:09:58.082 --rc geninfo_all_blocks=1 00:09:58.082 --rc geninfo_unexecuted_blocks=1 00:09:58.082 00:09:58.082 ' 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.082 --rc genhtml_branch_coverage=1 00:09:58.082 --rc 
genhtml_function_coverage=1 00:09:58.082 --rc genhtml_legend=1 00:09:58.082 --rc geninfo_all_blocks=1 00:09:58.082 --rc geninfo_unexecuted_blocks=1 00:09:58.082 00:09:58.082 ' 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.082 05:34:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:58.082 05:34:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:58.082 05:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:04.653 05:34:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:04.653 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:04.654 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:04.654 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:04.654 Found net devices under 0000:af:00.0: cvl_0_0 00:10:04.654 05:34:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:04.654 Found net devices under 0000:af:00.1: cvl_0_1 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.654 05:34:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:04.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:04.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:10:04.654 00:10:04.654 --- 10.0.0.2 ping statistics --- 00:10:04.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.654 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:04.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:04.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:10:04.654 00:10:04.654 --- 10.0.0.1 ping statistics --- 00:10:04.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.654 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1069007 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1069007 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1069007 ']' 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.654 05:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.654 [2024-12-10 05:34:51.962518] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:10:04.654 [2024-12-10 05:34:51.962572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.654 [2024-12-10 05:34:52.040841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.654 [2024-12-10 05:34:52.080072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.654 [2024-12-10 05:34:52.080105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:04.654 [2024-12-10 05:34:52.080112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.654 [2024-12-10 05:34:52.080118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.654 [2024-12-10 05:34:52.080124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.654 [2024-12-10 05:34:52.080593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.654 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.654 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:04.654 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.654 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.654 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.655 [2024-12-10 05:34:52.214675] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.655 [2024-12-10 05:34:52.234852] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.655 malloc0 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:04.655 { 00:10:04.655 "params": { 00:10:04.655 "name": "Nvme$subsystem", 00:10:04.655 "trtype": "$TEST_TRANSPORT", 00:10:04.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.655 "adrfam": "ipv4", 00:10:04.655 "trsvcid": "$NVMF_PORT", 00:10:04.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.655 "hdgst": ${hdgst:-false}, 00:10:04.655 "ddgst": ${ddgst:-false} 00:10:04.655 }, 00:10:04.655 "method": "bdev_nvme_attach_controller" 00:10:04.655 } 00:10:04.655 EOF 00:10:04.655 )") 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:04.655 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:04.655 "params": { 00:10:04.655 "name": "Nvme1", 00:10:04.655 "trtype": "tcp", 00:10:04.655 "traddr": "10.0.0.2", 00:10:04.655 "adrfam": "ipv4", 00:10:04.655 "trsvcid": "4420", 00:10:04.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.655 "hdgst": false, 00:10:04.655 "ddgst": false 00:10:04.655 }, 00:10:04.655 "method": "bdev_nvme_attach_controller" 00:10:04.655 }' 00:10:04.655 [2024-12-10 05:34:52.314543] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:10:04.655 [2024-12-10 05:34:52.314584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069034 ] 00:10:04.655 [2024-12-10 05:34:52.386654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.655 [2024-12-10 05:34:52.426094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.914 Running I/O for 10 seconds... 
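The bdevperf configuration fed through `--json /dev/fd/62` above is built by `gen_nvmf_target_json` (nvmf/common.sh@560-@586 in the trace): each subsystem appends one JSON fragment to a `config` array via a heredoc, and the fragments are joined with `IFS=,` before being printed. A minimal standalone sketch of that pattern follows; the values are copied from the rendered config in the log (Nvme1, 10.0.0.2:4420), not taken from the actual helper, and the digest flags are hardcoded where the real script uses `${hdgst:-false}`/`${ddgst:-false}`.

```shell
# Sketch of the gen_nvmf_target_json pattern visible in the trace above.
# Each loop iteration renders one bdev_nvme_attach_controller entry;
# values mirror the log's rendered Nvme1 block and are illustrative only.
config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas and emit the final config document.
IFS=,
printf '%s\n' "${config[*]}"
```

Printing the joined array to stdout lets the caller hand it to bdevperf on a file descriptor (`--json /dev/fd/62`), so no temporary config file is ever written to disk.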
00:10:06.787 8741.00 IOPS, 68.29 MiB/s [2024-12-10T04:34:56.065Z] 8802.00 IOPS, 68.77 MiB/s [2024-12-10T04:34:57.002Z] 8832.00 IOPS, 69.00 MiB/s [2024-12-10T04:34:57.939Z] 8850.25 IOPS, 69.14 MiB/s [2024-12-10T04:34:58.874Z] 8853.80 IOPS, 69.17 MiB/s [2024-12-10T04:34:59.810Z] 8856.17 IOPS, 69.19 MiB/s [2024-12-10T04:35:00.747Z] 8865.00 IOPS, 69.26 MiB/s [2024-12-10T04:35:01.681Z] 8848.50 IOPS, 69.13 MiB/s [2024-12-10T04:35:03.060Z] 8847.11 IOPS, 69.12 MiB/s [2024-12-10T04:35:03.060Z] 8852.10 IOPS, 69.16 MiB/s 00:10:15.164 Latency(us) 00:10:15.164 [2024-12-10T04:35:03.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.164 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:15.164 Verification LBA range: start 0x0 length 0x1000 00:10:15.164 Nvme1n1 : 10.01 8852.49 69.16 0.00 0.00 14418.11 1958.28 23717.79 00:10:15.164 [2024-12-10T04:35:03.060Z] =================================================================================================================== 00:10:15.164 [2024-12-10T04:35:03.060Z] Total : 8852.49 69.16 0.00 0.00 14418.11 1958.28 23717.79 00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1070818 00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.164 05:35:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.164 { 00:10:15.164 "params": { 00:10:15.164 "name": "Nvme$subsystem", 00:10:15.164 "trtype": "$TEST_TRANSPORT", 00:10:15.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.164 "adrfam": "ipv4", 00:10:15.164 "trsvcid": "$NVMF_PORT", 00:10:15.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.164 "hdgst": ${hdgst:-false}, 00:10:15.164 "ddgst": ${ddgst:-false} 00:10:15.164 }, 00:10:15.164 "method": "bdev_nvme_attach_controller" 00:10:15.164 } 00:10:15.164 EOF 00:10:15.164 )") 00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:15.164 [2024-12-10 05:35:02.817827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.817859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:15.164 05:35:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.164 "params": { 00:10:15.164 "name": "Nvme1", 00:10:15.164 "trtype": "tcp", 00:10:15.164 "traddr": "10.0.0.2", 00:10:15.164 "adrfam": "ipv4", 00:10:15.164 "trsvcid": "4420", 00:10:15.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.164 "hdgst": false, 00:10:15.164 "ddgst": false 00:10:15.164 }, 00:10:15.164 "method": "bdev_nvme_attach_controller" 00:10:15.164 }' 00:10:15.164 [2024-12-10 05:35:02.829830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.829843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.841856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.841867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.853888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.853900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.855953] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:10:15.164 [2024-12-10 05:35:02.855995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070818 ] 00:10:15.164 [2024-12-10 05:35:02.865919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.865934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.877950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.877960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.889984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.889994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.902016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.902026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.914048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.914058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.926082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.926091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.929965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.164 [2024-12-10 05:35:02.938113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:15.164 [2024-12-10 05:35:02.938125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.950145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.950160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.962180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.164 [2024-12-10 05:35:02.962191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.164 [2024-12-10 05:35:02.969625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.164 [2024-12-10 05:35:02.974211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.165 [2024-12-10 05:35:02.974223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.165 [2024-12-10 05:35:02.986257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.165 [2024-12-10 05:35:02.986277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.165 [2024-12-10 05:35:02.998289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.165 [2024-12-10 05:35:02.998314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.165 [2024-12-10 05:35:03.010311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.165 [2024-12-10 05:35:03.010325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.165 [2024-12-10 05:35:03.022337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.165 [2024-12-10 05:35:03.022351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.165 [2024-12-10 05:35:03.034370] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:15.165 [2024-12-10 05:35:03.034385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:15.424 Running I/O for 5 seconds...
00:10:16.459 17059.00 IOPS, 133.27 MiB/s [2024-12-10T04:35:04.355Z]
00:10:17.498 [2024-12-10 05:35:05.139348]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.139368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.152813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.152832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.166467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.166486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.175147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.175172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.184404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.184423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.198662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.198684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.212357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.212376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.225879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.225898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.239720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.239739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.253325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.253344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.266865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.266885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.280178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.280197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.294056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.294076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 17092.50 IOPS, 133.54 MiB/s [2024-12-10T04:35:05.394Z] [2024-12-10 05:35:05.307622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.307641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.321119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.321139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.334890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.334910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.348650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.348671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.362523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.362543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.498 [2024-12-10 05:35:05.375978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.498 [2024-12-10 05:35:05.375998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.756 [2024-12-10 05:35:05.389902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.389921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.403510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.403530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.417250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.417270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.430822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.430841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.443900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.443920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.457449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 
[2024-12-10 05:35:05.457468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.470714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.470734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.484712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.484731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.495580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.495599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.509694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.509713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.518447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.518466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.532113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.532132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.545613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.545632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.558975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.558994] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.572567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.572586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.586301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.586320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.599622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.599640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.613004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.613023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.626818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.626837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.757 [2024-12-10 05:35:05.640388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.757 [2024-12-10 05:35:05.640407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.654134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.654153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.667684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.667703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:18.016 [2024-12-10 05:35:05.681275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.681293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.695037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.695056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.708600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.708619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.722217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.722235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.735811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.735830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.749537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.749556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.763049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.763068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.776506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.776526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.790357] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.790376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.803628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.803657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.817496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.817515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.831240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.831259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.844941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.844961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.858598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.858617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.867463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.867481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.882416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.882435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.016 [2024-12-10 05:35:05.898087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.016 [2024-12-10 05:35:05.898106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:05.912361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:05.912381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:05.925557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:05.925576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:05.939905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:05.939925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:05.953594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:05.953613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:05.966787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:05.966808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:05.980419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:05.980439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:05.993625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:05.993646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.006961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 
[2024-12-10 05:35:06.006982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.020721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:06.020741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.034052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:06.034071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.043491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:06.043511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.057633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:06.057656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.071053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:06.071073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.084991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:06.085011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.096008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:06.096027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.110306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:06.110325] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.123968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:06.123988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.137939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:06.137959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.151500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.275 [2024-12-10 05:35:06.151519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.275 [2024-12-10 05:35:06.160263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.276 [2024-12-10 05:35:06.160283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.174862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.174882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.188468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.188487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.201941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.201960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.215600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.215621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:18.535 [2024-12-10 05:35:06.229063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.229083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.238532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.238551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.248119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.248137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.262012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.262032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.275769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.275788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.289629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.289649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.299041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.299064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 17105.67 IOPS, 133.64 MiB/s [2024-12-10T04:35:06.431Z] [2024-12-10 05:35:06.313045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.313064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:18.535 [2024-12-10 05:35:06.326772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.326791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.340447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.340465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.353792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.353812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.367494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.367513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.380883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.380902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.394458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.394477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.408615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.408635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.535 [2024-12-10 05:35:06.422338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.535 [2024-12-10 05:35:06.422358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.794 [2024-12-10 05:35:06.436275] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.794 [2024-12-10 05:35:06.436294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.449887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.449906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.458713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.458732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.473178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.473198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.486489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.486509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.499789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.499808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.513946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.513964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.527917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.527936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.541647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.541666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.550407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.550426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.564717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.564736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.578481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.578500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.592440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.592459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.603272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.603290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.612435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.612454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.621537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.621555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.635752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 
[2024-12-10 05:35:06.635773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.649323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.649343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.663032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.663054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.795 [2024-12-10 05:35:06.676949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.795 [2024-12-10 05:35:06.676968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.054 [2024-12-10 05:35:06.690844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.054 [2024-12-10 05:35:06.690862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.054 [2024-12-10 05:35:06.699559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.054 [2024-12-10 05:35:06.699578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.054 [2024-12-10 05:35:06.713459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.054 [2024-12-10 05:35:06.713478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.054 [2024-12-10 05:35:06.727124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.054 [2024-12-10 05:35:06.727143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.054 [2024-12-10 05:35:06.740360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.054 [2024-12-10 05:35:06.740379] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:19.054 [2024-12-10 05:35:06.749157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:19.054 [2024-12-10 05:35:06.749184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two messages repeat, differing only in timestamp, from 2024-12-10 05:35:06.763230 through 05:35:07.303152 ...]
00:10:19.573 17120.00 IOPS, 133.75 MiB/s [2024-12-10T04:35:07.469Z]
[... the same two messages repeat, differing only in timestamp, from 2024-12-10 05:35:07.316829 through 05:35:08.307521 ...]
00:10:20.611 17127.40 IOPS, 133.81 MiB/s
00:10:20.611 Latency(us)
00:10:20.611 [2024-12-10T04:35:08.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:20.611 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:20.611 Nvme1n1 : 5.01 17134.29 133.86 0.00 0.00 7464.48 3432.84 13981.01
00:10:20.611 [2024-12-10T04:35:08.507Z] ===================================================================================================================
00:10:20.611 [2024-12-10T04:35:08.507Z] Total : 17134.29 133.86 0.00 0.00 7464.48 3432.84 13981.01
00:10:20.611 [2024-12-10 05:35:08.317850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.611 [2024-12-10 05:35:08.317868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two messages repeat, differing only in timestamp, from 2024-12-10 05:35:08.329878 through 05:35:08.426144 ...]
00:10:20.611 [2024-12-10 05:35:08.438164]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.611 [2024-12-10 05:35:08.438179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.611 [2024-12-10 05:35:08.450198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.611 [2024-12-10 05:35:08.450210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.611 [2024-12-10 05:35:08.462229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.611 [2024-12-10 05:35:08.462239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.611 [2024-12-10 05:35:08.474261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.611 [2024-12-10 05:35:08.474271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1070818) - No such process 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1070818 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.611 delay0 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.611 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.870 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.870 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:20.870 [2024-12-10 05:35:08.651316] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:27.436 Initializing NVMe Controllers 00:10:27.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:27.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:27.436 Initialization complete. Launching workers. 
00:10:27.436 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 267 00:10:27.436 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 554, failed to submit 33 00:10:27.436 success 373, unsuccessful 181, failed 0 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:27.436 rmmod nvme_tcp 00:10:27.436 rmmod nvme_fabrics 00:10:27.436 rmmod nvme_keyring 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1069007 ']' 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1069007 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1069007 ']' 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1069007 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1069007 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1069007' 00:10:27.436 killing process with pid 1069007 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1069007 00:10:27.436 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1069007 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.436 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.343 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:29.343 00:10:29.343 real 0m31.419s 00:10:29.343 user 0m41.937s 00:10:29.343 sys 0m11.030s 00:10:29.343 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.343 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.343 ************************************ 00:10:29.343 END TEST nvmf_zcopy 00:10:29.343 ************************************ 00:10:29.343 05:35:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:29.344 05:35:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.344 05:35:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.344 05:35:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.344 ************************************ 00:10:29.344 START TEST nvmf_nmic 00:10:29.344 ************************************ 00:10:29.344 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:29.344 * Looking for test storage... 
00:10:29.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.604 05:35:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.604 --rc genhtml_branch_coverage=1 00:10:29.604 --rc genhtml_function_coverage=1 00:10:29.604 --rc genhtml_legend=1 00:10:29.604 --rc geninfo_all_blocks=1 00:10:29.604 --rc geninfo_unexecuted_blocks=1 
00:10:29.604 00:10:29.604 ' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.604 --rc genhtml_branch_coverage=1 00:10:29.604 --rc genhtml_function_coverage=1 00:10:29.604 --rc genhtml_legend=1 00:10:29.604 --rc geninfo_all_blocks=1 00:10:29.604 --rc geninfo_unexecuted_blocks=1 00:10:29.604 00:10:29.604 ' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.604 --rc genhtml_branch_coverage=1 00:10:29.604 --rc genhtml_function_coverage=1 00:10:29.604 --rc genhtml_legend=1 00:10:29.604 --rc geninfo_all_blocks=1 00:10:29.604 --rc geninfo_unexecuted_blocks=1 00:10:29.604 00:10:29.604 ' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.604 --rc genhtml_branch_coverage=1 00:10:29.604 --rc genhtml_function_coverage=1 00:10:29.604 --rc genhtml_legend=1 00:10:29.604 --rc geninfo_all_blocks=1 00:10:29.604 --rc geninfo_unexecuted_blocks=1 00:10:29.604 00:10:29.604 ' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.604 05:35:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.604 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:29.605 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:29.605 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:29.605 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.605 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.605 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.605 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:29.605 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:29.605 
05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:29.605 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.178 05:35:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:36.178 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:36.178 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:36.178 Found net devices under 0000:af:00.0: cvl_0_0 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:36.178 Found net devices under 0000:af:00.1: cvl_0_1 00:10:36.178 
05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:36.178 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.179 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:36.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:10:36.179 00:10:36.179 --- 10.0.0.2 ping statistics --- 00:10:36.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.179 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:10:36.179 00:10:36.179 --- 10.0.0.1 ping statistics --- 00:10:36.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.179 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1076300 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1076300 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1076300 ']' 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.179 [2024-12-10 05:35:23.324289] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:10:36.179 [2024-12-10 05:35:23.324331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.179 [2024-12-10 05:35:23.398607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.179 [2024-12-10 05:35:23.440321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.179 [2024-12-10 05:35:23.440358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:36.179 [2024-12-10 05:35:23.440365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.179 [2024-12-10 05:35:23.440371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.179 [2024-12-10 05:35:23.440376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.179 [2024-12-10 05:35:23.441723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.179 [2024-12-10 05:35:23.441832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.179 [2024-12-10 05:35:23.441936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.179 [2024-12-10 05:35:23.441937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.179 [2024-12-10 05:35:23.583441] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.179 
05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.179 Malloc0 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.179 [2024-12-10 05:35:23.643887] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:36.179 test case1: single bdev can't be used in multiple subsystems 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.179 [2024-12-10 05:35:23.675823] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:36.179 [2024-12-10 
05:35:23.675845] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:36.179 [2024-12-10 05:35:23.675854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.179 request: 00:10:36.179 { 00:10:36.179 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:36.179 "namespace": { 00:10:36.179 "bdev_name": "Malloc0", 00:10:36.179 "no_auto_visible": false, 00:10:36.179 "hide_metadata": false 00:10:36.179 }, 00:10:36.179 "method": "nvmf_subsystem_add_ns", 00:10:36.179 "req_id": 1 00:10:36.179 } 00:10:36.179 Got JSON-RPC error response 00:10:36.179 response: 00:10:36.179 { 00:10:36.179 "code": -32602, 00:10:36.179 "message": "Invalid parameters" 00:10:36.179 } 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:36.179 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:36.179 Adding namespace failed - expected result. 
00:10:36.180 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:36.180 test case2: host connect to nvmf target in multiple paths 00:10:36.180 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:36.180 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.180 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.180 [2024-12-10 05:35:23.687945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:36.180 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.180 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:37.116 05:35:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:38.107 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.107 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:38.107 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.107 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:38.107 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:40.108 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:40.108 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:40.108 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.108 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:40.108 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.108 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:40.108 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:40.108 [global] 00:10:40.108 thread=1 00:10:40.108 invalidate=1 00:10:40.108 rw=write 00:10:40.108 time_based=1 00:10:40.108 runtime=1 00:10:40.108 ioengine=libaio 00:10:40.108 direct=1 00:10:40.108 bs=4096 00:10:40.108 iodepth=1 00:10:40.108 norandommap=0 00:10:40.108 numjobs=1 00:10:40.108 00:10:40.108 verify_dump=1 00:10:40.108 verify_backlog=512 00:10:40.108 verify_state_save=0 00:10:40.108 do_verify=1 00:10:40.108 verify=crc32c-intel 00:10:40.108 [job0] 00:10:40.108 filename=/dev/nvme0n1 00:10:40.108 Could not set queue depth (nvme0n1) 00:10:40.367 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.367 fio-3.35 00:10:40.367 Starting 1 thread 00:10:41.743 00:10:41.743 job0: (groupid=0, jobs=1): err= 0: pid=1077292: Tue Dec 10 05:35:29 2024 00:10:41.743 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:41.743 slat (nsec): min=6556, max=38245, avg=7491.07, stdev=1008.17 00:10:41.743 clat (usec): min=154, max=1845, avg=211.90, stdev=42.94 00:10:41.743 lat (usec): min=161, max=1853, 
avg=219.39, stdev=42.98 00:10:41.743 clat percentiles (usec): 00:10:41.743 | 1.00th=[ 167], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 196], 00:10:41.743 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:10:41.743 | 70.00th=[ 212], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 253], 00:10:41.743 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 314], 99.95th=[ 1106], 00:10:41.743 | 99.99th=[ 1844] 00:10:41.743 write: IOPS=2631, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec); 0 zone resets 00:10:41.743 slat (usec): min=9, max=40647, avg=36.66, stdev=950.45 00:10:41.743 clat (usec): min=102, max=296, avg=124.95, stdev=25.30 00:10:41.743 lat (usec): min=113, max=40858, avg=161.62, stdev=953.24 00:10:41.743 clat percentiles (usec): 00:10:41.744 | 1.00th=[ 106], 5.00th=[ 110], 10.00th=[ 111], 20.00th=[ 113], 00:10:41.744 | 30.00th=[ 115], 40.00th=[ 117], 50.00th=[ 118], 60.00th=[ 120], 00:10:41.744 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 147], 95.00th=[ 159], 00:10:41.744 | 99.00th=[ 243], 99.50th=[ 243], 99.90th=[ 262], 99.95th=[ 281], 00:10:41.744 | 99.99th=[ 297] 00:10:41.744 bw ( KiB/s): min=12151, max=12151, per=100.00%, avg=12151.00, stdev= 0.00, samples=1 00:10:41.744 iops : min= 3037, max= 3037, avg=3037.00, stdev= 0.00, samples=1 00:10:41.744 lat (usec) : 250=96.80%, 500=3.16% 00:10:41.744 lat (msec) : 2=0.04% 00:10:41.744 cpu : usr=2.20%, sys=5.40%, ctx=5198, majf=0, minf=1 00:10:41.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.744 issued rwts: total=2560,2634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.744 00:10:41.744 Run status group 0 (all jobs): 00:10:41.744 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 
00:10:41.744 WRITE: bw=10.3MiB/s (10.8MB/s), 10.3MiB/s-10.3MiB/s (10.8MB/s-10.8MB/s), io=10.3MiB (10.8MB), run=1001-1001msec 00:10:41.744 00:10:41.744 Disk stats (read/write): 00:10:41.744 nvme0n1: ios=2161/2560, merge=0/0, ticks=1432/301, in_queue=1733, util=99.70% 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.744 rmmod nvme_tcp 00:10:41.744 rmmod nvme_fabrics 00:10:41.744 rmmod nvme_keyring 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1076300 ']' 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1076300 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1076300 ']' 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1076300 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1076300 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1076300' 00:10:41.744 killing process with pid 1076300 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1076300 00:10:41.744 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1076300 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.003 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.538 05:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.538 00:10:44.538 real 0m14.744s 00:10:44.538 user 0m32.713s 00:10:44.538 sys 0m5.402s 00:10:44.538 05:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.538 05:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.538 ************************************ 00:10:44.538 END TEST nvmf_nmic 00:10:44.538 ************************************ 00:10:44.538 05:35:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:10:44.538 05:35:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.538 05:35:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.538 05:35:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.538 ************************************ 00:10:44.538 START TEST nvmf_fio_target 00:10:44.538 ************************************ 00:10:44.538 05:35:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:44.538 * Looking for test storage... 00:10:44.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:44.538 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.538 05:35:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.539 --rc genhtml_branch_coverage=1 00:10:44.539 --rc genhtml_function_coverage=1 00:10:44.539 --rc genhtml_legend=1 00:10:44.539 --rc geninfo_all_blocks=1 00:10:44.539 --rc geninfo_unexecuted_blocks=1 00:10:44.539 00:10:44.539 ' 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.539 --rc genhtml_branch_coverage=1 00:10:44.539 --rc genhtml_function_coverage=1 00:10:44.539 --rc genhtml_legend=1 00:10:44.539 --rc geninfo_all_blocks=1 00:10:44.539 --rc geninfo_unexecuted_blocks=1 00:10:44.539 00:10:44.539 ' 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.539 --rc genhtml_branch_coverage=1 00:10:44.539 --rc genhtml_function_coverage=1 00:10:44.539 --rc genhtml_legend=1 00:10:44.539 --rc geninfo_all_blocks=1 00:10:44.539 --rc geninfo_unexecuted_blocks=1 00:10:44.539 00:10:44.539 ' 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.539 --rc 
genhtml_branch_coverage=1 00:10:44.539 --rc genhtml_function_coverage=1 00:10:44.539 --rc genhtml_legend=1 00:10:44.539 --rc geninfo_all_blocks=1 00:10:44.539 --rc geninfo_unexecuted_blocks=1 00:10:44.539 00:10:44.539 ' 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.539 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.112 05:35:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:51.112 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:51.112 05:35:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:51.112 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:51.112 Found net devices under 0000:af:00.0: cvl_0_0 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.112 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:51.113 Found net devices under 0000:af:00.1: cvl_0_1 
00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.113 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:51.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:51.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:10:51.113 00:10:51.113 --- 10.0.0.2 ping statistics --- 00:10:51.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.113 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:10:51.113 00:10:51.113 --- 10.0.0.1 ping statistics --- 00:10:51.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.113 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1081063 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1081063 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1081063 ']' 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.113 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.113 [2024-12-10 05:35:38.217514] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:10:51.113 [2024-12-10 05:35:38.217558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.113 [2024-12-10 05:35:38.297001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.113 [2024-12-10 05:35:38.337784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.113 [2024-12-10 05:35:38.337822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.113 [2024-12-10 05:35:38.337831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.113 [2024-12-10 05:35:38.337838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.113 [2024-12-10 05:35:38.337842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:51.113 [2024-12-10 05:35:38.339275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.113 [2024-12-10 05:35:38.339388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.113 [2024-12-10 05:35:38.339494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.113 [2024-12-10 05:35:38.339495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.373 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.373 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:51.373 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:51.373 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.373 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.373 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.373 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:51.373 [2024-12-10 05:35:39.263470] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.632 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.891 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:51.891 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.891 05:35:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:51.891 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.150 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:52.150 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.408 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:52.408 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:52.667 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.667 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:52.667 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.926 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:52.926 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.184 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:53.184 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:53.443 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.702 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:53.702 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.702 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:53.702 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.961 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.219 [2024-12-10 05:35:41.922792] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.220 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:54.478 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:54.478 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:55.855 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:55.855 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:55.855 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.855 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:55.855 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:55.855 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:57.759 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:57.759 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:57.759 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.759 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:57.759 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.759 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:57.759 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:57.759 [global] 00:10:57.759 thread=1 00:10:57.759 invalidate=1 00:10:57.759 rw=write 00:10:57.759 time_based=1 00:10:57.759 runtime=1 00:10:57.759 ioengine=libaio 00:10:57.759 direct=1 00:10:57.759 bs=4096 00:10:57.759 iodepth=1 00:10:57.759 norandommap=0 00:10:57.759 numjobs=1 00:10:57.759 00:10:57.759 
verify_dump=1 00:10:57.759 verify_backlog=512 00:10:57.759 verify_state_save=0 00:10:57.759 do_verify=1 00:10:57.759 verify=crc32c-intel 00:10:57.759 [job0] 00:10:57.759 filename=/dev/nvme0n1 00:10:57.759 [job1] 00:10:57.759 filename=/dev/nvme0n2 00:10:57.759 [job2] 00:10:57.759 filename=/dev/nvme0n3 00:10:57.759 [job3] 00:10:57.759 filename=/dev/nvme0n4 00:10:58.017 Could not set queue depth (nvme0n1) 00:10:58.017 Could not set queue depth (nvme0n2) 00:10:58.017 Could not set queue depth (nvme0n3) 00:10:58.017 Could not set queue depth (nvme0n4) 00:10:58.275 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.275 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.275 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.275 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.275 fio-3.35 00:10:58.275 Starting 4 threads 00:10:59.650 00:10:59.650 job0: (groupid=0, jobs=1): err= 0: pid=1082420: Tue Dec 10 05:35:47 2024 00:10:59.650 read: IOPS=2360, BW=9443KiB/s (9669kB/s)(9452KiB/1001msec) 00:10:59.650 slat (nsec): min=7317, max=36057, avg=9429.22, stdev=1686.27 00:10:59.650 clat (usec): min=167, max=521, avg=228.44, stdev=33.52 00:10:59.650 lat (usec): min=177, max=541, avg=237.87, stdev=33.68 00:10:59.650 clat percentiles (usec): 00:10:59.650 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:59.650 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:10:59.650 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:10:59.650 | 99.00th=[ 363], 99.50th=[ 494], 99.90th=[ 519], 99.95th=[ 523], 00:10:59.650 | 99.99th=[ 523] 00:10:59.650 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:59.650 slat (nsec): min=10662, max=38398, avg=12671.87, 
stdev=2036.08 00:10:59.650 clat (usec): min=116, max=324, avg=152.02, stdev=16.77 00:10:59.650 lat (usec): min=127, max=362, avg=164.69, stdev=16.86 00:10:59.650 clat percentiles (usec): 00:10:59.650 | 1.00th=[ 125], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 139], 00:10:59.651 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:10:59.651 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 176], 95.00th=[ 186], 00:10:59.651 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 227], 99.95th=[ 233], 00:10:59.651 | 99.99th=[ 326] 00:10:59.651 bw ( KiB/s): min=11624, max=11624, per=48.67%, avg=11624.00, stdev= 0.00, samples=1 00:10:59.651 iops : min= 2906, max= 2906, avg=2906.00, stdev= 0.00, samples=1 00:10:59.651 lat (usec) : 250=92.85%, 500=6.93%, 750=0.22% 00:10:59.651 cpu : usr=4.50%, sys=8.00%, ctx=4926, majf=0, minf=1 00:10:59.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.651 issued rwts: total=2363,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.651 job1: (groupid=0, jobs=1): err= 0: pid=1082434: Tue Dec 10 05:35:47 2024 00:10:59.651 read: IOPS=2316, BW=9267KiB/s (9489kB/s)(9276KiB/1001msec) 00:10:59.651 slat (nsec): min=6700, max=30034, avg=7658.28, stdev=1093.51 00:10:59.651 clat (usec): min=149, max=1025, avg=219.96, stdev=42.80 00:10:59.651 lat (usec): min=169, max=1049, avg=227.62, stdev=42.98 00:10:59.651 clat percentiles (usec): 00:10:59.651 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 192], 00:10:59.651 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 225], 00:10:59.651 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 269], 00:10:59.651 | 99.00th=[ 322], 99.50th=[ 429], 99.90th=[ 791], 99.95th=[ 824], 00:10:59.651 | 99.99th=[ 1029] 
00:10:59.651 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:59.651 slat (usec): min=9, max=23123, avg=20.05, stdev=456.80 00:10:59.651 clat (usec): min=108, max=731, avg=159.59, stdev=39.98 00:10:59.651 lat (usec): min=119, max=23395, avg=179.64, stdev=460.76 00:10:59.651 clat percentiles (usec): 00:10:59.651 | 1.00th=[ 115], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 131], 00:10:59.651 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 145], 60.00th=[ 151], 00:10:59.651 | 70.00th=[ 172], 80.00th=[ 190], 90.00th=[ 215], 95.00th=[ 245], 00:10:59.651 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 510], 99.95th=[ 562], 00:10:59.651 | 99.99th=[ 734] 00:10:59.651 bw ( KiB/s): min= 9336, max= 9336, per=39.09%, avg=9336.00, stdev= 0.00, samples=1 00:10:59.651 iops : min= 2334, max= 2334, avg=2334.00, stdev= 0.00, samples=1 00:10:59.651 lat (usec) : 250=89.22%, 500=10.62%, 750=0.10%, 1000=0.04% 00:10:59.651 lat (msec) : 2=0.02% 00:10:59.651 cpu : usr=2.80%, sys=4.40%, ctx=4881, majf=0, minf=2 00:10:59.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.651 issued rwts: total=2319,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.651 job2: (groupid=0, jobs=1): err= 0: pid=1082450: Tue Dec 10 05:35:47 2024 00:10:59.651 read: IOPS=323, BW=1294KiB/s (1325kB/s)(1324KiB/1023msec) 00:10:59.651 slat (nsec): min=7771, max=24307, avg=9943.50, stdev=2157.95 00:10:59.651 clat (usec): min=182, max=41200, avg=2716.37, stdev=9707.63 00:10:59.651 lat (usec): min=192, max=41212, avg=2726.31, stdev=9707.90 00:10:59.651 clat percentiles (usec): 00:10:59.651 | 1.00th=[ 208], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 235], 00:10:59.651 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 
00:10:59.651 | 70.00th=[ 258], 80.00th=[ 277], 90.00th=[ 343], 95.00th=[40633], 00:10:59.651 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:59.651 | 99.99th=[41157] 00:10:59.651 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:10:59.651 slat (nsec): min=11406, max=44030, avg=13473.46, stdev=2326.08 00:10:59.651 clat (usec): min=148, max=308, avg=213.38, stdev=42.94 00:10:59.651 lat (usec): min=161, max=352, avg=226.86, stdev=43.07 00:10:59.651 clat percentiles (usec): 00:10:59.651 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 174], 00:10:59.651 | 30.00th=[ 182], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 217], 00:10:59.651 | 70.00th=[ 245], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 281], 00:10:59.651 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 310], 99.95th=[ 310], 00:10:59.651 | 99.99th=[ 310] 00:10:59.651 bw ( KiB/s): min= 4096, max= 4096, per=17.15%, avg=4096.00, stdev= 0.00, samples=1 00:10:59.651 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:59.651 lat (usec) : 250=65.01%, 500=32.50%, 750=0.12% 00:10:59.651 lat (msec) : 50=2.37% 00:10:59.651 cpu : usr=0.29%, sys=1.86%, ctx=844, majf=0, minf=1 00:10:59.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.651 issued rwts: total=331,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.651 job3: (groupid=0, jobs=1): err= 0: pid=1082456: Tue Dec 10 05:35:47 2024 00:10:59.651 read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec) 00:10:59.651 slat (nsec): min=9748, max=27211, avg=13498.87, stdev=4264.71 00:10:59.651 clat (usec): min=381, max=42207, avg=39374.70, stdev=8511.56 00:10:59.651 lat (usec): min=394, max=42219, avg=39388.20, stdev=8511.75 
00:10:59.651 clat percentiles (usec): 00:10:59.651 | 1.00th=[ 383], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:59.651 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:59.651 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:59.651 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:59.651 | 99.99th=[42206] 00:10:59.651 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:59.651 slat (nsec): min=10907, max=43672, avg=12798.51, stdev=2528.76 00:10:59.651 clat (usec): min=139, max=327, avg=221.76, stdev=28.34 00:10:59.651 lat (usec): min=151, max=339, avg=234.56, stdev=28.79 00:10:59.651 clat percentiles (usec): 00:10:59.651 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 190], 00:10:59.651 | 30.00th=[ 202], 40.00th=[ 235], 50.00th=[ 237], 60.00th=[ 239], 00:10:59.651 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 253], 00:10:59.651 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 326], 99.95th=[ 326], 00:10:59.651 | 99.99th=[ 326] 00:10:59.651 bw ( KiB/s): min= 4096, max= 4096, per=17.15%, avg=4096.00, stdev= 0.00, samples=1 00:10:59.651 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:59.651 lat (usec) : 250=90.28%, 500=5.61% 00:10:59.651 lat (msec) : 50=4.11% 00:10:59.651 cpu : usr=0.19%, sys=1.17%, ctx=536, majf=0, minf=1 00:10:59.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.651 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.651 00:10:59.651 Run status group 0 (all jobs): 00:10:59.651 READ: bw=19.1MiB/s (20.0MB/s), 89.4KiB/s-9443KiB/s (91.6kB/s-9669kB/s), io=19.7MiB (20.6MB), run=1001-1029msec 00:10:59.651 
WRITE: bw=23.3MiB/s (24.5MB/s), 1990KiB/s-9.99MiB/s (2038kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1029msec 00:10:59.651 00:10:59.651 Disk stats (read/write): 00:10:59.651 nvme0n1: ios=2100/2114, merge=0/0, ticks=795/306, in_queue=1101, util=98.00% 00:10:59.651 nvme0n2: ios=2039/2048, merge=0/0, ticks=889/325, in_queue=1214, util=98.58% 00:10:59.651 nvme0n3: ios=383/512, merge=0/0, ticks=1037/103, in_queue=1140, util=98.34% 00:10:59.651 nvme0n4: ios=76/512, merge=0/0, ticks=1552/105, in_queue=1657, util=98.43% 00:10:59.651 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:59.651 [global] 00:10:59.651 thread=1 00:10:59.651 invalidate=1 00:10:59.651 rw=randwrite 00:10:59.651 time_based=1 00:10:59.651 runtime=1 00:10:59.651 ioengine=libaio 00:10:59.651 direct=1 00:10:59.651 bs=4096 00:10:59.651 iodepth=1 00:10:59.651 norandommap=0 00:10:59.651 numjobs=1 00:10:59.651 00:10:59.651 verify_dump=1 00:10:59.651 verify_backlog=512 00:10:59.651 verify_state_save=0 00:10:59.651 do_verify=1 00:10:59.651 verify=crc32c-intel 00:10:59.651 [job0] 00:10:59.651 filename=/dev/nvme0n1 00:10:59.651 [job1] 00:10:59.651 filename=/dev/nvme0n2 00:10:59.651 [job2] 00:10:59.651 filename=/dev/nvme0n3 00:10:59.651 [job3] 00:10:59.651 filename=/dev/nvme0n4 00:10:59.651 Could not set queue depth (nvme0n1) 00:10:59.651 Could not set queue depth (nvme0n2) 00:10:59.651 Could not set queue depth (nvme0n3) 00:10:59.651 Could not set queue depth (nvme0n4) 00:10:59.651 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.651 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.651 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.651 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.651 fio-3.35 00:10:59.651 Starting 4 threads 00:11:01.027 00:11:01.027 job0: (groupid=0, jobs=1): err= 0: pid=1082907: Tue Dec 10 05:35:48 2024 00:11:01.027 read: IOPS=505, BW=2023KiB/s (2072kB/s)(2096KiB/1036msec) 00:11:01.027 slat (nsec): min=6616, max=26351, avg=8015.02, stdev=2858.26 00:11:01.027 clat (usec): min=181, max=42180, avg=1572.21, stdev=7267.35 00:11:01.027 lat (usec): min=188, max=42187, avg=1580.22, stdev=7269.25 00:11:01.027 clat percentiles (usec): 00:11:01.027 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 221], 00:11:01.027 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:11:01.027 | 70.00th=[ 251], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 314], 00:11:01.027 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:01.027 | 99.99th=[42206] 00:11:01.027 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets 00:11:01.027 slat (nsec): min=6335, max=44047, avg=11024.09, stdev=2250.04 00:11:01.027 clat (usec): min=118, max=373, avg=188.37, stdev=38.51 00:11:01.027 lat (usec): min=130, max=400, avg=199.40, stdev=38.45 00:11:01.027 clat percentiles (usec): 00:11:01.027 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:11:01.027 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 202], 60.00th=[ 210], 00:11:01.027 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 243], 00:11:01.027 | 99.00th=[ 260], 99.50th=[ 277], 99.90th=[ 318], 99.95th=[ 375], 00:11:01.027 | 99.99th=[ 375] 00:11:01.027 bw ( KiB/s): min= 3864, max= 4328, per=20.72%, avg=4096.00, stdev=328.10, samples=2 00:11:01.027 iops : min= 966, max= 1082, avg=1024.00, stdev=82.02, samples=2 00:11:01.027 lat (usec) : 250=87.98%, 500=10.92% 00:11:01.027 lat (msec) : 50=1.10% 00:11:01.027 cpu : usr=0.97%, sys=1.54%, ctx=1551, majf=0, minf=1 00:11:01.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:11:01.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.027 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.027 job1: (groupid=0, jobs=1): err= 0: pid=1082919: Tue Dec 10 05:35:48 2024 00:11:01.027 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:11:01.027 slat (nsec): min=10376, max=23913, avg=22046.82, stdev=2635.68 00:11:01.027 clat (usec): min=40897, max=41505, avg=40987.30, stdev=123.33 00:11:01.027 lat (usec): min=40919, max=41516, avg=41009.34, stdev=120.89 00:11:01.027 clat percentiles (usec): 00:11:01.027 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:01.027 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:01.027 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:01.027 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:01.027 | 99.99th=[41681] 00:11:01.027 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:11:01.027 slat (nsec): min=9756, max=38756, avg=11279.62, stdev=2173.32 00:11:01.027 clat (usec): min=128, max=281, avg=219.77, stdev=15.02 00:11:01.027 lat (usec): min=142, max=308, avg=231.05, stdev=14.89 00:11:01.027 clat percentiles (usec): 00:11:01.027 | 1.00th=[ 174], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:11:01.027 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 223], 00:11:01.027 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 243], 00:11:01.027 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 281], 99.95th=[ 281], 00:11:01.027 | 99.99th=[ 281] 00:11:01.027 bw ( KiB/s): min= 4096, max= 4096, per=20.72%, avg=4096.00, stdev= 0.00, samples=1 00:11:01.027 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:01.027 lat (usec) : 250=94.19%, 500=1.69% 
00:11:01.027 lat (msec) : 50=4.12% 00:11:01.027 cpu : usr=0.00%, sys=1.37%, ctx=534, majf=0, minf=2 00:11:01.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.028 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.028 job2: (groupid=0, jobs=1): err= 0: pid=1082936: Tue Dec 10 05:35:48 2024 00:11:01.028 read: IOPS=775, BW=3102KiB/s (3177kB/s)(3180KiB/1025msec) 00:11:01.028 slat (nsec): min=7386, max=26170, avg=8840.65, stdev=2645.28 00:11:01.028 clat (usec): min=187, max=42033, avg=1047.52, stdev=5704.83 00:11:01.028 lat (usec): min=195, max=42042, avg=1056.36, stdev=5705.44 00:11:01.028 clat percentiles (usec): 00:11:01.028 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 00:11:01.028 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:11:01.028 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:11:01.028 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:01.028 | 99.99th=[42206] 00:11:01.028 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:11:01.028 slat (nsec): min=10631, max=52671, avg=12049.46, stdev=2447.00 00:11:01.028 clat (usec): min=125, max=660, avg=162.96, stdev=21.76 00:11:01.028 lat (usec): min=137, max=671, avg=175.01, stdev=22.02 00:11:01.028 clat percentiles (usec): 00:11:01.028 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:11:01.028 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:11:01.028 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 190], 00:11:01.028 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 314], 99.95th=[ 660], 00:11:01.028 | 99.99th=[ 660] 00:11:01.028 bw ( KiB/s): min= 8192, max= 8192, 
per=41.44%, avg=8192.00, stdev= 0.00, samples=1 00:11:01.028 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:01.028 lat (usec) : 250=93.29%, 500=5.77%, 750=0.05% 00:11:01.028 lat (msec) : 50=0.88% 00:11:01.028 cpu : usr=1.56%, sys=2.73%, ctx=1820, majf=0, minf=1 00:11:01.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.028 issued rwts: total=795,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.028 job3: (groupid=0, jobs=1): err= 0: pid=1082941: Tue Dec 10 05:35:48 2024 00:11:01.028 read: IOPS=2312, BW=9251KiB/s (9473kB/s)(9260KiB/1001msec) 00:11:01.028 slat (nsec): min=2288, max=35535, avg=8176.09, stdev=1804.54 00:11:01.028 clat (usec): min=166, max=496, avg=224.87, stdev=35.37 00:11:01.028 lat (usec): min=174, max=504, avg=233.05, stdev=35.36 00:11:01.028 clat percentiles (usec): 00:11:01.028 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:11:01.028 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 227], 00:11:01.028 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 281], 00:11:01.028 | 99.00th=[ 343], 99.50th=[ 453], 99.90th=[ 478], 99.95th=[ 486], 00:11:01.028 | 99.99th=[ 498] 00:11:01.028 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:01.028 slat (nsec): min=3188, max=43509, avg=11088.56, stdev=3368.38 00:11:01.028 clat (usec): min=107, max=278, avg=163.49, stdev=32.16 00:11:01.028 lat (usec): min=111, max=293, avg=174.58, stdev=33.28 00:11:01.028 clat percentiles (usec): 00:11:01.028 | 1.00th=[ 121], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 139], 00:11:01.028 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 161], 00:11:01.028 | 70.00th=[ 169], 80.00th=[ 198], 90.00th=[ 219], 
95.00th=[ 229], 00:11:01.028 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 258], 99.95th=[ 265], 00:11:01.028 | 99.99th=[ 281] 00:11:01.028 bw ( KiB/s): min=10376, max=10376, per=52.49%, avg=10376.00, stdev= 0.00, samples=1 00:11:01.028 iops : min= 2594, max= 2594, avg=2594.00, stdev= 0.00, samples=1 00:11:01.028 lat (usec) : 250=90.81%, 500=9.19% 00:11:01.028 cpu : usr=3.90%, sys=7.30%, ctx=4876, majf=0, minf=1 00:11:01.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.028 issued rwts: total=2315,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.028 00:11:01.028 Run status group 0 (all jobs): 00:11:01.028 READ: bw=13.8MiB/s (14.5MB/s), 86.1KiB/s-9251KiB/s (88.2kB/s-9473kB/s), io=14.3MiB (15.0MB), run=1001-1036msec 00:11:01.028 WRITE: bw=19.3MiB/s (20.2MB/s), 2004KiB/s-9.99MiB/s (2052kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1036msec 00:11:01.028 00:11:01.028 Disk stats (read/write): 00:11:01.028 nvme0n1: ios=549/1024, merge=0/0, ticks=1330/194, in_queue=1524, util=99.40% 00:11:01.028 nvme0n2: ios=51/512, merge=0/0, ticks=714/110, in_queue=824, util=87.61% 00:11:01.028 nvme0n3: ios=820/1024, merge=0/0, ticks=1045/154, in_queue=1199, util=98.86% 00:11:01.028 nvme0n4: ios=2061/2048, merge=0/0, ticks=1377/307, in_queue=1684, util=97.38% 00:11:01.028 05:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:01.028 [global] 00:11:01.028 thread=1 00:11:01.028 invalidate=1 00:11:01.028 rw=write 00:11:01.028 time_based=1 00:11:01.028 runtime=1 00:11:01.028 ioengine=libaio 00:11:01.028 direct=1 00:11:01.028 bs=4096 00:11:01.028 iodepth=128 00:11:01.028 norandommap=0 
00:11:01.028 numjobs=1 00:11:01.028 00:11:01.028 verify_dump=1 00:11:01.028 verify_backlog=512 00:11:01.028 verify_state_save=0 00:11:01.028 do_verify=1 00:11:01.028 verify=crc32c-intel 00:11:01.028 [job0] 00:11:01.028 filename=/dev/nvme0n1 00:11:01.028 [job1] 00:11:01.028 filename=/dev/nvme0n2 00:11:01.028 [job2] 00:11:01.028 filename=/dev/nvme0n3 00:11:01.028 [job3] 00:11:01.028 filename=/dev/nvme0n4 00:11:01.028 Could not set queue depth (nvme0n1) 00:11:01.028 Could not set queue depth (nvme0n2) 00:11:01.028 Could not set queue depth (nvme0n3) 00:11:01.028 Could not set queue depth (nvme0n4) 00:11:01.287 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.287 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.287 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.287 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.287 fio-3.35 00:11:01.287 Starting 4 threads 00:11:02.675 00:11:02.675 job0: (groupid=0, jobs=1): err= 0: pid=1083334: Tue Dec 10 05:35:50 2024 00:11:02.675 read: IOPS=5565, BW=21.7MiB/s (22.8MB/s)(22.0MiB/1012msec) 00:11:02.675 slat (nsec): min=1073, max=13371k, avg=87363.60, stdev=647145.61 00:11:02.675 clat (usec): min=3313, max=29562, avg=11148.59, stdev=3194.89 00:11:02.675 lat (usec): min=3318, max=29576, avg=11235.95, stdev=3238.16 00:11:02.675 clat percentiles (usec): 00:11:02.675 | 1.00th=[ 3949], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[ 9372], 00:11:02.675 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10421], 00:11:02.675 | 70.00th=[11338], 80.00th=[13042], 90.00th=[16581], 95.00th=[18220], 00:11:02.675 | 99.00th=[20841], 99.50th=[22938], 99.90th=[25560], 99.95th=[25560], 00:11:02.675 | 99.99th=[29492] 00:11:02.675 write: IOPS=5950, BW=23.2MiB/s 
(24.4MB/s)(23.5MiB/1012msec); 0 zone resets 00:11:02.675 slat (usec): min=2, max=10640, avg=76.34, stdev=500.15 00:11:02.675 clat (usec): min=548, max=54473, avg=10900.36, stdev=6678.95 00:11:02.675 lat (usec): min=554, max=54478, avg=10976.69, stdev=6716.22 00:11:02.675 clat percentiles (usec): 00:11:02.675 | 1.00th=[ 2147], 5.00th=[ 4293], 10.00th=[ 5145], 20.00th=[ 7504], 00:11:02.675 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:11:02.675 | 70.00th=[10290], 80.00th=[11994], 90.00th=[17171], 95.00th=[20317], 00:11:02.676 | 99.00th=[50070], 99.50th=[52167], 99.90th=[53740], 99.95th=[54264], 00:11:02.676 | 99.99th=[54264] 00:11:02.676 bw ( KiB/s): min=22008, max=25144, per=31.19%, avg=23576.00, stdev=2217.49, samples=2 00:11:02.676 iops : min= 5502, max= 6286, avg=5894.00, stdev=554.37, samples=2 00:11:02.676 lat (usec) : 750=0.03% 00:11:02.676 lat (msec) : 2=0.42%, 4=2.22%, 10=50.33%, 20=43.76%, 50=2.69% 00:11:02.676 lat (msec) : 100=0.53% 00:11:02.676 cpu : usr=4.06%, sys=5.14%, ctx=593, majf=0, minf=1 00:11:02.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:02.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.676 issued rwts: total=5632,6022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.676 job1: (groupid=0, jobs=1): err= 0: pid=1083335: Tue Dec 10 05:35:50 2024 00:11:02.676 read: IOPS=4120, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1009msec) 00:11:02.676 slat (nsec): min=1100, max=24708k, avg=129680.98, stdev=976070.43 00:11:02.676 clat (usec): min=3658, max=65058, avg=15432.82, stdev=9905.59 00:11:02.676 lat (usec): min=3664, max=65069, avg=15562.50, stdev=9982.72 00:11:02.676 clat percentiles (usec): 00:11:02.676 | 1.00th=[ 4293], 5.00th=[ 7177], 10.00th=[ 8455], 20.00th=[ 9372], 00:11:02.676 | 30.00th=[10159], 
40.00th=[10683], 50.00th=[11863], 60.00th=[12911], 00:11:02.676 | 70.00th=[16319], 80.00th=[20055], 90.00th=[28443], 95.00th=[40109], 00:11:02.676 | 99.00th=[52167], 99.50th=[62653], 99.90th=[65274], 99.95th=[65274], 00:11:02.676 | 99.99th=[65274] 00:11:02.676 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:11:02.676 slat (nsec): min=1955, max=10594k, avg=90617.98, stdev=489684.93 00:11:02.676 clat (usec): min=313, max=65023, avg=13798.11, stdev=10170.81 00:11:02.676 lat (usec): min=532, max=66382, avg=13888.73, stdev=10215.37 00:11:02.676 clat percentiles (usec): 00:11:02.676 | 1.00th=[ 1500], 5.00th=[ 3589], 10.00th=[ 5538], 20.00th=[ 8029], 00:11:02.676 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10159], 60.00th=[11469], 00:11:02.676 | 70.00th=[13829], 80.00th=[20055], 90.00th=[22414], 95.00th=[31589], 00:11:02.676 | 99.00th=[56361], 99.50th=[59507], 99.90th=[62129], 99.95th=[62129], 00:11:02.676 | 99.99th=[65274] 00:11:02.676 bw ( KiB/s): min=14856, max=21488, per=24.04%, avg=18172.00, stdev=4689.53, samples=2 00:11:02.676 iops : min= 3714, max= 5372, avg=4543.00, stdev=1172.38, samples=2 00:11:02.676 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:11:02.676 lat (msec) : 2=1.04%, 4=2.27%, 10=31.49%, 20=44.63%, 50=18.61% 00:11:02.676 lat (msec) : 100=1.92% 00:11:02.676 cpu : usr=2.58%, sys=3.77%, ctx=525, majf=0, minf=1 00:11:02.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:02.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.676 issued rwts: total=4158,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.676 job2: (groupid=0, jobs=1): err= 0: pid=1083336: Tue Dec 10 05:35:50 2024 00:11:02.676 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:11:02.676 slat (nsec): min=1275, max=20848k, 
avg=127545.75, stdev=1021646.93 00:11:02.676 clat (usec): min=4879, max=56990, avg=16794.87, stdev=9074.11 00:11:02.676 lat (usec): min=4888, max=56998, avg=16922.42, stdev=9163.39 00:11:02.676 clat percentiles (usec): 00:11:02.676 | 1.00th=[ 5604], 5.00th=[ 6783], 10.00th=[ 9241], 20.00th=[11207], 00:11:02.676 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13173], 60.00th=[15795], 00:11:02.676 | 70.00th=[19530], 80.00th=[21627], 90.00th=[25035], 95.00th=[33817], 00:11:02.676 | 99.00th=[52167], 99.50th=[54789], 99.90th=[56886], 99.95th=[56886], 00:11:02.676 | 99.99th=[56886] 00:11:02.676 write: IOPS=3858, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1007msec); 0 zone resets 00:11:02.676 slat (usec): min=2, max=23737, avg=104.10, stdev=688.09 00:11:02.676 clat (usec): min=1169, max=56975, avg=17338.97, stdev=8340.46 00:11:02.676 lat (usec): min=1180, max=56985, avg=17443.06, stdev=8387.19 00:11:02.676 clat percentiles (usec): 00:11:02.676 | 1.00th=[ 2343], 5.00th=[ 4686], 10.00th=[ 7570], 20.00th=[10945], 00:11:02.676 | 30.00th=[12387], 40.00th=[13960], 50.00th=[15926], 60.00th=[20579], 00:11:02.676 | 70.00th=[21365], 80.00th=[22676], 90.00th=[27657], 95.00th=[32113], 00:11:02.676 | 99.00th=[43254], 99.50th=[46400], 99.90th=[56361], 99.95th=[56886], 00:11:02.676 | 99.99th=[56886] 00:11:02.676 bw ( KiB/s): min=13104, max=16960, per=19.89%, avg=15032.00, stdev=2726.60, samples=2 00:11:02.676 iops : min= 3276, max= 4240, avg=3758.00, stdev=681.65, samples=2 00:11:02.676 lat (msec) : 2=0.36%, 4=1.37%, 10=13.59%, 20=49.20%, 50=34.55% 00:11:02.676 lat (msec) : 100=0.94% 00:11:02.676 cpu : usr=2.39%, sys=4.47%, ctx=412, majf=0, minf=1 00:11:02.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:02.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.676 issued rwts: total=3584,3886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.676 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:11:02.676 job3: (groupid=0, jobs=1): err= 0: pid=1083337: Tue Dec 10 05:35:50 2024 00:11:02.676 read: IOPS=4581, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:11:02.676 slat (nsec): min=1121, max=14520k, avg=108529.24, stdev=787966.97 00:11:02.676 clat (usec): min=1853, max=42831, avg=14268.96, stdev=6004.36 00:11:02.676 lat (usec): min=1863, max=42867, avg=14377.49, stdev=6059.54 00:11:02.676 clat percentiles (usec): 00:11:02.676 | 1.00th=[ 5866], 5.00th=[ 8094], 10.00th=[ 9634], 20.00th=[10552], 00:11:02.676 | 30.00th=[11076], 40.00th=[11338], 50.00th=[12125], 60.00th=[12911], 00:11:02.676 | 70.00th=[14353], 80.00th=[18482], 90.00th=[22414], 95.00th=[29230], 00:11:02.676 | 99.00th=[34341], 99.50th=[37487], 99.90th=[42730], 99.95th=[42730], 00:11:02.676 | 99.99th=[42730] 00:11:02.676 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:11:02.676 slat (nsec): min=1937, max=17885k, avg=97992.08, stdev=617993.93 00:11:02.676 clat (usec): min=2057, max=36805, avg=13346.46, stdev=5479.31 00:11:02.676 lat (usec): min=2072, max=36809, avg=13444.45, stdev=5519.24 00:11:02.676 clat percentiles (usec): 00:11:02.676 | 1.00th=[ 4490], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10421], 00:11:02.676 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:11:02.676 | 70.00th=[12518], 80.00th=[14877], 90.00th=[22938], 95.00th=[24511], 00:11:02.676 | 99.00th=[31327], 99.50th=[32637], 99.90th=[33162], 99.95th=[33162], 00:11:02.676 | 99.99th=[36963] 00:11:02.676 bw ( KiB/s): min=16384, max=20480, per=24.38%, avg=18432.00, stdev=2896.31, samples=2 00:11:02.676 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:11:02.676 lat (msec) : 2=0.08%, 4=0.51%, 10=13.88%, 20=69.93%, 50=15.61% 00:11:02.676 cpu : usr=3.09%, sys=5.18%, ctx=489, majf=0, minf=1 00:11:02.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:02.676 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.676 issued rwts: total=4600,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.676 00:11:02.676 Run status group 0 (all jobs): 00:11:02.676 READ: bw=69.4MiB/s (72.7MB/s), 13.9MiB/s-21.7MiB/s (14.6MB/s-22.8MB/s), io=70.2MiB (73.6MB), run=1004-1012msec 00:11:02.676 WRITE: bw=73.8MiB/s (77.4MB/s), 15.1MiB/s-23.2MiB/s (15.8MB/s-24.4MB/s), io=74.7MiB (78.3MB), run=1004-1012msec 00:11:02.676 00:11:02.676 Disk stats (read/write): 00:11:02.676 nvme0n1: ios=5170/5295, merge=0/0, ticks=50796/46296, in_queue=97092, util=86.27% 00:11:02.676 nvme0n2: ios=3605/3614, merge=0/0, ticks=36095/32260, in_queue=68355, util=90.04% 00:11:02.676 nvme0n3: ios=2690/3072, merge=0/0, ticks=47635/57573, in_queue=105208, util=94.49% 00:11:02.676 nvme0n4: ios=3767/4096, merge=0/0, ticks=37864/39673, in_queue=77537, util=95.18% 00:11:02.676 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:02.676 [global] 00:11:02.676 thread=1 00:11:02.676 invalidate=1 00:11:02.676 rw=randwrite 00:11:02.676 time_based=1 00:11:02.676 runtime=1 00:11:02.676 ioengine=libaio 00:11:02.676 direct=1 00:11:02.676 bs=4096 00:11:02.676 iodepth=128 00:11:02.676 norandommap=0 00:11:02.676 numjobs=1 00:11:02.676 00:11:02.676 verify_dump=1 00:11:02.676 verify_backlog=512 00:11:02.676 verify_state_save=0 00:11:02.676 do_verify=1 00:11:02.676 verify=crc32c-intel 00:11:02.676 [job0] 00:11:02.676 filename=/dev/nvme0n1 00:11:02.676 [job1] 00:11:02.676 filename=/dev/nvme0n2 00:11:02.676 [job2] 00:11:02.676 filename=/dev/nvme0n3 00:11:02.676 [job3] 00:11:02.676 filename=/dev/nvme0n4 00:11:02.676 Could not set queue depth (nvme0n1) 00:11:02.676 Could not set queue depth (nvme0n2) 
00:11:02.676 Could not set queue depth (nvme0n3) 00:11:02.676 Could not set queue depth (nvme0n4) 00:11:02.940 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.940 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.940 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.940 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.940 fio-3.35 00:11:02.940 Starting 4 threads 00:11:04.316 00:11:04.316 job0: (groupid=0, jobs=1): err= 0: pid=1083709: Tue Dec 10 05:35:51 2024 00:11:04.316 read: IOPS=5672, BW=22.2MiB/s (23.2MB/s)(22.3MiB/1008msec) 00:11:04.316 slat (nsec): min=1252, max=16935k, avg=81596.85, stdev=610952.97 00:11:04.316 clat (usec): min=1783, max=34915, avg=10653.32, stdev=3765.59 00:11:04.316 lat (usec): min=1789, max=36202, avg=10734.92, stdev=3813.14 00:11:04.316 clat percentiles (usec): 00:11:04.316 | 1.00th=[ 2999], 5.00th=[ 5407], 10.00th=[ 7439], 20.00th=[ 8979], 00:11:04.316 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:11:04.316 | 70.00th=[10290], 80.00th=[12125], 90.00th=[16057], 95.00th=[17171], 00:11:04.316 | 99.00th=[26346], 99.50th=[26346], 99.90th=[32113], 99.95th=[32113], 00:11:04.316 | 99.99th=[34866] 00:11:04.316 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:11:04.316 slat (nsec): min=1910, max=34602k, avg=78589.41, stdev=637959.75 00:11:04.316 clat (usec): min=1022, max=55433, avg=10892.45, stdev=7876.70 00:11:04.316 lat (usec): min=1030, max=55437, avg=10971.04, stdev=7912.91 00:11:04.316 clat percentiles (usec): 00:11:04.316 | 1.00th=[ 2769], 5.00th=[ 4752], 10.00th=[ 6194], 20.00th=[ 8094], 00:11:04.316 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10028], 00:11:04.316 | 70.00th=[10159], 
80.00th=[10290], 90.00th=[12649], 95.00th=[20055], 00:11:04.316 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:11:04.316 | 99.99th=[55313] 00:11:04.316 bw ( KiB/s): min=22592, max=26179, per=32.74%, avg=24385.50, stdev=2536.39, samples=2 00:11:04.316 iops : min= 5648, max= 6544, avg=6096.00, stdev=633.57, samples=2 00:11:04.316 lat (msec) : 2=0.35%, 4=2.06%, 10=57.17%, 20=36.74%, 50=2.96% 00:11:04.316 lat (msec) : 100=0.73% 00:11:04.316 cpu : usr=3.48%, sys=6.55%, ctx=641, majf=0, minf=1 00:11:04.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:04.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.316 issued rwts: total=5718,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.316 job1: (groupid=0, jobs=1): err= 0: pid=1083710: Tue Dec 10 05:35:51 2024 00:11:04.316 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:11:04.316 slat (nsec): min=1323, max=10381k, avg=127743.09, stdev=772107.16 00:11:04.316 clat (usec): min=4618, max=40501, avg=16491.94, stdev=6179.11 00:11:04.316 lat (usec): min=4627, max=44441, avg=16619.69, stdev=6253.29 00:11:04.316 clat percentiles (usec): 00:11:04.316 | 1.00th=[ 4752], 5.00th=[ 7504], 10.00th=[ 9503], 20.00th=[10159], 00:11:04.316 | 30.00th=[12387], 40.00th=[13042], 50.00th=[15795], 60.00th=[18482], 00:11:04.316 | 70.00th=[20055], 80.00th=[21890], 90.00th=[25035], 95.00th=[28181], 00:11:04.316 | 99.00th=[31851], 99.50th=[35390], 99.90th=[37487], 99.95th=[40633], 00:11:04.316 | 99.99th=[40633] 00:11:04.316 write: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(15.9MiB/1006msec); 0 zone resets 00:11:04.316 slat (usec): min=2, max=6565, avg=122.62, stdev=613.63 00:11:04.316 clat (usec): min=502, max=37841, avg=16727.41, stdev=8217.46 00:11:04.316 lat (usec): min=510, max=37851, avg=16850.03, 
stdev=8271.58 00:11:04.316 clat percentiles (usec): 00:11:04.316 | 1.00th=[ 1663], 5.00th=[ 3523], 10.00th=[ 6521], 20.00th=[ 8717], 00:11:04.316 | 30.00th=[10683], 40.00th=[15139], 50.00th=[17433], 60.00th=[17957], 00:11:04.316 | 70.00th=[20317], 80.00th=[23200], 90.00th=[29230], 95.00th=[31851], 00:11:04.316 | 99.00th=[34866], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:11:04.316 | 99.99th=[38011] 00:11:04.316 bw ( KiB/s): min=14800, max=16664, per=21.12%, avg=15732.00, stdev=1318.05, samples=2 00:11:04.316 iops : min= 3700, max= 4166, avg=3933.00, stdev=329.51, samples=2 00:11:04.316 lat (usec) : 750=0.21%, 1000=0.16% 00:11:04.316 lat (msec) : 2=0.54%, 4=2.52%, 10=17.57%, 20=47.78%, 50=31.23% 00:11:04.316 cpu : usr=3.08%, sys=5.57%, ctx=396, majf=0, minf=1 00:11:04.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:04.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.316 issued rwts: total=3584,4060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.316 job2: (groupid=0, jobs=1): err= 0: pid=1083711: Tue Dec 10 05:35:51 2024 00:11:04.316 read: IOPS=3994, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1008msec) 00:11:04.316 slat (nsec): min=1195, max=28108k, avg=133454.80, stdev=972049.89 00:11:04.316 clat (usec): min=733, max=46020, avg=17073.08, stdev=8007.64 00:11:04.316 lat (usec): min=3746, max=46026, avg=17206.54, stdev=8056.17 00:11:04.316 clat percentiles (usec): 00:11:04.316 | 1.00th=[ 4817], 5.00th=[ 8455], 10.00th=[10945], 20.00th=[11469], 00:11:04.316 | 30.00th=[12256], 40.00th=[12518], 50.00th=[14222], 60.00th=[16712], 00:11:04.316 | 70.00th=[19268], 80.00th=[21890], 90.00th=[26346], 95.00th=[32900], 00:11:04.316 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:11:04.316 | 99.99th=[45876] 00:11:04.316 write: 
IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:11:04.316 slat (usec): min=2, max=12601, avg=104.80, stdev=674.21 00:11:04.316 clat (usec): min=4024, max=27965, avg=14235.02, stdev=4493.90 00:11:04.316 lat (usec): min=4032, max=27995, avg=14339.83, stdev=4545.79 00:11:04.316 clat percentiles (usec): 00:11:04.316 | 1.00th=[ 4047], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[10945], 00:11:04.316 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12780], 60.00th=[15795], 00:11:04.316 | 70.00th=[17171], 80.00th=[18220], 90.00th=[20579], 95.00th=[21103], 00:11:04.316 | 99.00th=[22414], 99.50th=[23725], 99.90th=[27132], 99.95th=[27132], 00:11:04.316 | 99.99th=[27919] 00:11:04.316 bw ( KiB/s): min=12720, max=20048, per=22.00%, avg=16384.00, stdev=5181.68, samples=2 00:11:04.316 iops : min= 3180, max= 5012, avg=4096.00, stdev=1295.42, samples=2 00:11:04.316 lat (usec) : 750=0.01% 00:11:04.316 lat (msec) : 4=0.41%, 10=11.43%, 20=65.56%, 50=22.59% 00:11:04.316 cpu : usr=2.78%, sys=5.26%, ctx=274, majf=0, minf=1 00:11:04.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:04.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.316 issued rwts: total=4026,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.316 job3: (groupid=0, jobs=1): err= 0: pid=1083712: Tue Dec 10 05:35:51 2024 00:11:04.316 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:11:04.316 slat (nsec): min=1147, max=12192k, avg=104229.95, stdev=668434.16 00:11:04.316 clat (usec): min=3928, max=41266, avg=13932.31, stdev=4784.94 00:11:04.316 lat (usec): min=3956, max=41271, avg=14036.54, stdev=4831.01 00:11:04.316 clat percentiles (usec): 00:11:04.316 | 1.00th=[ 4883], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10945], 00:11:04.316 | 30.00th=[11207], 40.00th=[11469], 
50.00th=[11863], 60.00th=[13960], 00:11:04.316 | 70.00th=[15926], 80.00th=[17695], 90.00th=[20317], 95.00th=[21890], 00:11:04.316 | 99.00th=[27132], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:04.316 | 99.99th=[41157] 00:11:04.316 write: IOPS=4437, BW=17.3MiB/s (18.2MB/s)(17.5MiB/1007msec); 0 zone resets 00:11:04.316 slat (nsec): min=1922, max=12876k, avg=111026.29, stdev=756833.06 00:11:04.316 clat (usec): min=1417, max=56648, avg=15810.98, stdev=8869.33 00:11:04.316 lat (usec): min=1429, max=56656, avg=15922.01, stdev=8913.60 00:11:04.316 clat percentiles (usec): 00:11:04.316 | 1.00th=[ 4752], 5.00th=[ 7767], 10.00th=[ 9241], 20.00th=[10552], 00:11:04.316 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[15008], 00:11:04.316 | 70.00th=[16450], 80.00th=[20317], 90.00th=[25035], 95.00th=[34866], 00:11:04.316 | 99.00th=[54789], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:11:04.316 | 99.99th=[56886] 00:11:04.316 bw ( KiB/s): min=14192, max=20536, per=23.31%, avg=17364.00, stdev=4485.89, samples=2 00:11:04.316 iops : min= 3548, max= 5134, avg=4341.00, stdev=1121.47, samples=2 00:11:04.316 lat (msec) : 2=0.02%, 4=0.25%, 10=13.10%, 20=71.22%, 50=14.75% 00:11:04.316 lat (msec) : 100=0.67% 00:11:04.316 cpu : usr=2.78%, sys=4.67%, ctx=393, majf=0, minf=2 00:11:04.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:04.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.316 issued rwts: total=4096,4469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.316 00:11:04.316 Run status group 0 (all jobs): 00:11:04.316 READ: bw=67.5MiB/s (70.8MB/s), 13.9MiB/s-22.2MiB/s (14.6MB/s-23.2MB/s), io=68.1MiB (71.4MB), run=1006-1008msec 00:11:04.316 WRITE: bw=72.7MiB/s (76.3MB/s), 15.8MiB/s-23.8MiB/s (16.5MB/s-25.0MB/s), io=73.3MiB (76.9MB), 
run=1006-1008msec 00:11:04.316 00:11:04.316 Disk stats (read/write): 00:11:04.316 nvme0n1: ios=5141/5335, merge=0/0, ticks=46185/40475, in_queue=86660, util=96.79% 00:11:04.316 nvme0n2: ios=3023/3072, merge=0/0, ticks=26249/23057, in_queue=49306, util=91.36% 00:11:04.316 nvme0n3: ios=3223/3584, merge=0/0, ticks=26040/19322, in_queue=45362, util=93.97% 00:11:04.316 nvme0n4: ios=3642/3739, merge=0/0, ticks=31987/34058, in_queue=66045, util=98.43% 00:11:04.316 05:35:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:04.316 05:35:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1083932 00:11:04.316 05:35:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:04.317 05:35:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:04.317 [global] 00:11:04.317 thread=1 00:11:04.317 invalidate=1 00:11:04.317 rw=read 00:11:04.317 time_based=1 00:11:04.317 runtime=10 00:11:04.317 ioengine=libaio 00:11:04.317 direct=1 00:11:04.317 bs=4096 00:11:04.317 iodepth=1 00:11:04.317 norandommap=1 00:11:04.317 numjobs=1 00:11:04.317 00:11:04.317 [job0] 00:11:04.317 filename=/dev/nvme0n1 00:11:04.317 [job1] 00:11:04.317 filename=/dev/nvme0n2 00:11:04.317 [job2] 00:11:04.317 filename=/dev/nvme0n3 00:11:04.317 [job3] 00:11:04.317 filename=/dev/nvme0n4 00:11:04.317 Could not set queue depth (nvme0n1) 00:11:04.317 Could not set queue depth (nvme0n2) 00:11:04.317 Could not set queue depth (nvme0n3) 00:11:04.317 Could not set queue depth (nvme0n4) 00:11:04.317 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.317 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.317 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:11:04.317 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.317 fio-3.35 00:11:04.317 Starting 4 threads 00:11:07.605 05:35:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:07.605 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:07.605 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=536576, buflen=4096 00:11:07.605 fio: pid=1084078, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:07.605 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=54861824, buflen=4096 00:11:07.605 fio: pid=1084077, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:07.605 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.605 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:07.605 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6791168, buflen=4096 00:11:07.605 fio: pid=1084074, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:07.605 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.605 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:07.864 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=679936, buflen=4096 00:11:07.864 fio: pid=1084076, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:07.864 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.864 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:07.864 00:11:07.864 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1084074: Tue Dec 10 05:35:55 2024 00:11:07.864 read: IOPS=532, BW=2130KiB/s (2181kB/s)(6632KiB/3114msec) 00:11:07.864 slat (usec): min=6, max=11612, avg=20.12, stdev=348.01 00:11:07.864 clat (usec): min=166, max=42046, avg=1843.65, stdev=8045.51 00:11:07.864 lat (usec): min=173, max=42069, avg=1863.78, stdev=8053.41 00:11:07.864 clat percentiles (usec): 00:11:07.864 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 188], 00:11:07.864 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:11:07.864 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 326], 00:11:07.864 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.864 | 99.99th=[42206] 00:11:07.864 bw ( KiB/s): min= 96, max=10615, per=10.10%, avg=1853.17, stdev=4292.41, samples=6 00:11:07.864 iops : min= 24, max= 2653, avg=463.17, stdev=1072.80, samples=6 00:11:07.864 lat (usec) : 250=91.50%, 500=4.40%, 750=0.06% 00:11:07.864 lat (msec) : 50=3.98% 00:11:07.864 cpu : usr=0.13%, sys=0.55%, ctx=1661, majf=0, minf=1 00:11:07.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.864 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.864 issued rwts: total=1659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.864 
job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1084076: Tue Dec 10 05:35:55 2024 00:11:07.864 read: IOPS=49, BW=198KiB/s (203kB/s)(664KiB/3346msec) 00:11:07.864 slat (usec): min=7, max=6794, avg=84.86, stdev=639.94 00:11:07.864 clat (usec): min=210, max=45037, avg=19936.88, stdev=20376.36 00:11:07.864 lat (usec): min=232, max=47984, avg=20022.11, stdev=20463.77 00:11:07.864 clat percentiles (usec): 00:11:07.864 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 251], 20.00th=[ 318], 00:11:07.864 | 30.00th=[ 347], 40.00th=[ 379], 50.00th=[ 461], 60.00th=[40633], 00:11:07.864 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:07.864 | 99.00th=[41681], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:11:07.864 | 99.99th=[44827] 00:11:07.864 bw ( KiB/s): min= 96, max= 368, per=1.14%, avg=209.83, stdev=125.49, samples=6 00:11:07.864 iops : min= 24, max= 92, avg=52.33, stdev=31.51, samples=6 00:11:07.864 lat (usec) : 250=9.58%, 500=40.72%, 750=0.60% 00:11:07.864 lat (msec) : 2=0.60%, 50=47.90% 00:11:07.864 cpu : usr=0.18%, sys=0.00%, ctx=169, majf=0, minf=2 00:11:07.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.864 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.864 issued rwts: total=167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.864 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1084077: Tue Dec 10 05:35:55 2024 00:11:07.864 read: IOPS=4611, BW=18.0MiB/s (18.9MB/s)(52.3MiB/2905msec) 00:11:07.864 slat (usec): min=6, max=14922, avg=10.51, stdev=145.01 00:11:07.864 clat (usec): min=151, max=3343, avg=203.05, stdev=37.09 00:11:07.864 lat (usec): min=166, max=15214, avg=213.56, stdev=150.80 00:11:07.864 clat 
percentiles (usec): 00:11:07.864 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:11:07.864 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:11:07.864 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 233], 00:11:07.864 | 99.00th=[ 269], 99.50th=[ 383], 99.90th=[ 408], 99.95th=[ 519], 00:11:07.864 | 99.99th=[ 1303] 00:11:07.864 bw ( KiB/s): min=17912, max=19904, per=100.00%, avg=18699.20, stdev=774.07, samples=5 00:11:07.864 iops : min= 4478, max= 4976, avg=4674.80, stdev=193.52, samples=5 00:11:07.864 lat (usec) : 250=97.97%, 500=1.97%, 750=0.04% 00:11:07.864 lat (msec) : 2=0.01%, 4=0.01% 00:11:07.864 cpu : usr=2.24%, sys=6.99%, ctx=13398, majf=0, minf=2 00:11:07.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.864 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.864 issued rwts: total=13395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.864 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1084078: Tue Dec 10 05:35:55 2024 00:11:07.864 read: IOPS=48, BW=191KiB/s (196kB/s)(524KiB/2741msec) 00:11:07.864 slat (nsec): min=8641, max=32838, avg=16795.58, stdev=7234.87 00:11:07.864 clat (usec): min=396, max=42070, avg=20737.49, stdev=20557.06 00:11:07.864 lat (usec): min=405, max=42094, avg=20754.24, stdev=20563.83 00:11:07.864 clat percentiles (usec): 00:11:07.864 | 1.00th=[ 400], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 412], 00:11:07.864 | 30.00th=[ 416], 40.00th=[ 424], 50.00th=[ 515], 60.00th=[41157], 00:11:07.864 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:07.864 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.864 | 99.99th=[42206] 00:11:07.864 bw ( KiB/s): min= 96, max= 512, 
per=1.09%, avg=200.00, stdev=180.13, samples=5 00:11:07.864 iops : min= 24, max= 128, avg=50.00, stdev=45.03, samples=5 00:11:07.864 lat (usec) : 500=48.48%, 750=1.52% 00:11:07.864 lat (msec) : 50=49.24% 00:11:07.864 cpu : usr=0.00%, sys=0.15%, ctx=132, majf=0, minf=2 00:11:07.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.864 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.864 issued rwts: total=132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.864 00:11:07.864 Run status group 0 (all jobs): 00:11:07.864 READ: bw=17.9MiB/s (18.8MB/s), 191KiB/s-18.0MiB/s (196kB/s-18.9MB/s), io=60.0MiB (62.9MB), run=2741-3346msec 00:11:07.864 00:11:07.864 Disk stats (read/write): 00:11:07.864 nvme0n1: ios=1659/0, merge=0/0, ticks=3062/0, in_queue=3062, util=95.10% 00:11:07.864 nvme0n2: ios=160/0, merge=0/0, ticks=3065/0, in_queue=3065, util=96.07% 00:11:07.864 nvme0n3: ios=13271/0, merge=0/0, ticks=2562/0, in_queue=2562, util=95.84% 00:11:07.864 nvme0n4: ios=128/0, merge=0/0, ticks=2593/0, in_queue=2593, util=96.48% 00:11:08.123 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.123 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:08.381 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.382 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:08.640 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.640 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:08.640 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.640 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:08.899 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:08.899 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1083932 00:11:08.899 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:08.900 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.159 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.159 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:09.159 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:09.159 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.159 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:09.159 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.159 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 
-- # return 0 00:11:09.159 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:09.159 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:09.159 nvmf hotplug test: fio failed as expected 00:11:09.159 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:09.418 rmmod nvme_tcp 00:11:09.418 rmmod nvme_fabrics 00:11:09.418 rmmod nvme_keyring 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:09.418 05:35:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1081063 ']' 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1081063 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1081063 ']' 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1081063 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1081063 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1081063' 00:11:09.418 killing process with pid 1081063 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1081063 00:11:09.418 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1081063 00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.678 05:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.583 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.583 00:11:11.583 real 0m27.491s 00:11:11.583 user 1m49.780s 00:11:11.583 sys 0m8.634s 00:11:11.583 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.583 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.583 ************************************ 00:11:11.583 END TEST nvmf_fio_target 00:11:11.583 ************************************ 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:11.842 ************************************ 00:11:11.842 START TEST nvmf_bdevio 00:11:11.842 ************************************ 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:11.842 * Looking for test storage... 00:11:11.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # 
ver1_l=2 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.842 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 
00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:11.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.843 --rc genhtml_branch_coverage=1 00:11:11.843 --rc genhtml_function_coverage=1 00:11:11.843 --rc genhtml_legend=1 00:11:11.843 --rc geninfo_all_blocks=1 00:11:11.843 --rc geninfo_unexecuted_blocks=1 00:11:11.843 00:11:11.843 ' 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:11.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.843 --rc genhtml_branch_coverage=1 00:11:11.843 --rc genhtml_function_coverage=1 00:11:11.843 --rc genhtml_legend=1 00:11:11.843 --rc geninfo_all_blocks=1 00:11:11.843 --rc geninfo_unexecuted_blocks=1 00:11:11.843 00:11:11.843 ' 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:11.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.843 --rc genhtml_branch_coverage=1 00:11:11.843 --rc genhtml_function_coverage=1 00:11:11.843 --rc genhtml_legend=1 00:11:11.843 --rc geninfo_all_blocks=1 00:11:11.843 --rc geninfo_unexecuted_blocks=1 00:11:11.843 00:11:11.843 ' 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:11.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.843 --rc genhtml_branch_coverage=1 00:11:11.843 --rc genhtml_function_coverage=1 00:11:11.843 --rc genhtml_legend=1 00:11:11.843 --rc geninfo_all_blocks=1 00:11:11.843 --rc geninfo_unexecuted_blocks=1 00:11:11.843 00:11:11.843 ' 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.843 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.102 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:18.672 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.672 05:36:05 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:18.672 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:18.672 Found net devices under 0000:af:00.0: cvl_0_0 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:18.672 Found net devices under 0000:af:00.1: cvl_0_1 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.672 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:11:18.673 00:11:18.673 --- 10.0.0.2 ping statistics --- 00:11:18.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.673 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
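The namespace plumbing traced above (ip netns add, moving cvl_0_0 into the namespace, addressing both ends, then the cross-namespace ping) can be replayed as a dry-run sketch. Here `run()` only echoes each command, since the real ones need root and the `cvl_0_*` interfaces that exist on this test host; drop the echo to execute for real.

```shell
# Dry-run sketch of the nvmf_tcp_init namespace setup from the trace.
# run() prints each step instead of executing it.
run() { echo "+ $*"; }   # replace body with: "$@" to actually run

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"              # target side moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                           # connectivity check, as in the log
```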
00:11:18.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:11:18.673 00:11:18.673 --- 10.0.0.1 ping statistics --- 00:11:18.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.673 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1088579 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0x78 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1088579 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1088579 ']' 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.673 [2024-12-10 05:36:05.762761] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:11:18.673 [2024-12-10 05:36:05.762813] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.673 [2024-12-10 05:36:05.841980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.673 [2024-12-10 05:36:05.883022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.673 [2024-12-10 05:36:05.883058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
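The `-i 0 -e 0xFFFF -m 0x78` argument string in the nvmf_tgt invocation above is assembled from bash array appends in nvmf/common.sh (`NVMF_APP+=(...)`, then prefixed with `NVMF_TARGET_NS_CMD`). A simplified sketch, with the array contents trimmed to what this particular run used; the real helper also has no-huge and interrupt-mode branches:

```shell
# Sketch: how the nvmf_tgt command line seen above is built from arrays.
NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)          # build_nvmf_app_args
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # run inside the netns
echo "${NVMF_APP[@]}" -m 0x78                        # core mask added at nvmfappstart
```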
00:11:18.673 [2024-12-10 05:36:05.883064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.673 [2024-12-10 05:36:05.883074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.673 [2024-12-10 05:36:05.883079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.673 [2024-12-10 05:36:05.884462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:18.673 [2024-12-10 05:36:05.884568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:18.673 [2024-12-10 05:36:05.884674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.673 [2024-12-10 05:36:05.884676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.673 05:36:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.673 [2024-12-10 05:36:06.021623] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.673 Malloc0 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.673 [2024-12-10 
05:36:06.086169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:18.673 { 00:11:18.673 "params": { 00:11:18.673 "name": "Nvme$subsystem", 00:11:18.673 "trtype": "$TEST_TRANSPORT", 00:11:18.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:18.673 "adrfam": "ipv4", 00:11:18.673 "trsvcid": "$NVMF_PORT", 00:11:18.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:18.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:18.673 "hdgst": ${hdgst:-false}, 00:11:18.673 "ddgst": ${ddgst:-false} 00:11:18.673 }, 00:11:18.673 "method": "bdev_nvme_attach_controller" 00:11:18.673 } 00:11:18.673 EOF 00:11:18.673 )") 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
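The heredoc-per-subsystem pattern above (`config+=("$(cat <<-EOF ...)")`, joined with `IFS=,` and passed through `jq`) can be reduced to a sketch for the single controller this run attaches. The values are the resolved ones printed just below in the trace; the function name is illustrative, and the real `gen_nvmf_target_json` loops over subsystem IDs.

```shell
# Minimal single-subsystem sketch of the JSON bdevio reads via --json /dev/fd/62.
gen_target_json() {
    local subsystem=1
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_target_json
```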
00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:18.673 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:18.673 "params": { 00:11:18.673 "name": "Nvme1", 00:11:18.673 "trtype": "tcp", 00:11:18.673 "traddr": "10.0.0.2", 00:11:18.673 "adrfam": "ipv4", 00:11:18.673 "trsvcid": "4420", 00:11:18.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.673 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:18.673 "hdgst": false, 00:11:18.673 "ddgst": false 00:11:18.673 }, 00:11:18.673 "method": "bdev_nvme_attach_controller" 00:11:18.673 }' 00:11:18.673 [2024-12-10 05:36:06.137817] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:11:18.673 [2024-12-10 05:36:06.137859] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088609 ] 00:11:18.673 [2024-12-10 05:36:06.212545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:18.673 [2024-12-10 05:36:06.254825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.673 [2024-12-10 05:36:06.254933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.673 [2024-12-10 05:36:06.254933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.673 I/O targets: 00:11:18.673 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:18.673 00:11:18.673 00:11:18.673 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.673 http://cunit.sourceforge.net/ 00:11:18.673 00:11:18.673 00:11:18.673 Suite: bdevio tests on: Nvme1n1 00:11:18.673 Test: blockdev write read block ...passed 00:11:18.673 Test: blockdev write zeroes read block ...passed 00:11:18.673 Test: blockdev write zeroes read no split ...passed 00:11:18.673 Test: blockdev write zeroes read split 
...passed 00:11:18.932 Test: blockdev write zeroes read split partial ...passed 00:11:18.932 Test: blockdev reset ...[2024-12-10 05:36:06.569628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:18.932 [2024-12-10 05:36:06.569687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1813610 (9): Bad file descriptor 00:11:18.932 [2024-12-10 05:36:06.704106] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:18.932 passed 00:11:18.932 Test: blockdev write read 8 blocks ...passed 00:11:18.932 Test: blockdev write read size > 128k ...passed 00:11:18.932 Test: blockdev write read invalid size ...passed 00:11:18.932 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.932 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.932 Test: blockdev write read max offset ...passed 00:11:19.191 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.191 Test: blockdev writev readv 8 blocks ...passed 00:11:19.191 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.191 Test: blockdev writev readv block ...passed 00:11:19.191 Test: blockdev writev readv size > 128k ...passed 00:11:19.191 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.191 Test: blockdev comparev and writev ...[2024-12-10 05:36:06.915887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.191 [2024-12-10 05:36:06.915914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:19.191 [2024-12-10 05:36:06.915928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.191 [2024-12-10 
05:36:06.915940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:19.191 [2024-12-10 05:36:06.916205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.191 [2024-12-10 05:36:06.916215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:19.191 [2024-12-10 05:36:06.916226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.191 [2024-12-10 05:36:06.916233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:19.191 [2024-12-10 05:36:06.916450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.192 [2024-12-10 05:36:06.916460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:19.192 [2024-12-10 05:36:06.916471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.192 [2024-12-10 05:36:06.916478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:19.192 [2024-12-10 05:36:06.916721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.192 [2024-12-10 05:36:06.916730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:19.192 [2024-12-10 05:36:06.916742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.192 [2024-12-10 05:36:06.916748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:19.192 passed 00:11:19.192 Test: blockdev nvme passthru rw ...passed 00:11:19.192 Test: blockdev nvme passthru vendor specific ...[2024-12-10 05:36:07.000460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:19.192 [2024-12-10 05:36:07.000475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:19.192 [2024-12-10 05:36:07.000578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:19.192 [2024-12-10 05:36:07.000588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:19.192 [2024-12-10 05:36:07.000692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:19.192 [2024-12-10 05:36:07.000701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:19.192 [2024-12-10 05:36:07.000806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:19.192 [2024-12-10 05:36:07.000814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:19.192 passed 00:11:19.192 Test: blockdev nvme admin passthru ...passed 00:11:19.192 Test: blockdev copy ...passed 00:11:19.192 00:11:19.192 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.192 suites 1 1 n/a 0 0 00:11:19.192 tests 23 23 23 0 0 00:11:19.192 asserts 152 152 152 0 n/a 00:11:19.192 00:11:19.192 Elapsed time = 1.222 seconds 
00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.451 rmmod nvme_tcp 00:11:19.451 rmmod nvme_fabrics 00:11:19.451 rmmod nvme_keyring 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1088579 ']' 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1088579 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1088579 ']' 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1088579 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1088579 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1088579' 00:11:19.451 killing process with pid 1088579 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1088579 00:11:19.451 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1088579 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.711 05:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.246 05:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.246 00:11:22.246 real 0m10.053s 00:11:22.246 user 0m10.371s 00:11:22.246 sys 0m4.986s 00:11:22.246 05:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.246 05:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.246 ************************************ 00:11:22.246 END TEST nvmf_bdevio 00:11:22.246 ************************************ 00:11:22.246 05:36:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:22.246 00:11:22.246 real 4m35.187s 00:11:22.246 user 10m29.503s 00:11:22.246 sys 1m38.637s 00:11:22.246 05:36:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.246 05:36:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:22.246 ************************************ 00:11:22.246 END TEST nvmf_target_core 00:11:22.246 ************************************ 00:11:22.246 05:36:09 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:22.246 05:36:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.247 05:36:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.247 05:36:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:22.247 ************************************ 00:11:22.247 START TEST nvmf_target_extra 00:11:22.247 ************************************ 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:22.247 * Looking for test storage... 00:11:22.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.247 --rc genhtml_branch_coverage=1 00:11:22.247 --rc genhtml_function_coverage=1 00:11:22.247 --rc genhtml_legend=1 00:11:22.247 --rc geninfo_all_blocks=1 
00:11:22.247 --rc geninfo_unexecuted_blocks=1 00:11:22.247 00:11:22.247 ' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.247 --rc genhtml_branch_coverage=1 00:11:22.247 --rc genhtml_function_coverage=1 00:11:22.247 --rc genhtml_legend=1 00:11:22.247 --rc geninfo_all_blocks=1 00:11:22.247 --rc geninfo_unexecuted_blocks=1 00:11:22.247 00:11:22.247 ' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.247 --rc genhtml_branch_coverage=1 00:11:22.247 --rc genhtml_function_coverage=1 00:11:22.247 --rc genhtml_legend=1 00:11:22.247 --rc geninfo_all_blocks=1 00:11:22.247 --rc geninfo_unexecuted_blocks=1 00:11:22.247 00:11:22.247 ' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.247 --rc genhtml_branch_coverage=1 00:11:22.247 --rc genhtml_function_coverage=1 00:11:22.247 --rc genhtml_legend=1 00:11:22.247 --rc geninfo_all_blocks=1 00:11:22.247 --rc geninfo_unexecuted_blocks=1 00:11:22.247 00:11:22.247 ' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:22.247 ************************************ 00:11:22.247 START TEST nvmf_example 00:11:22.247 ************************************ 00:11:22.247 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:22.247 * Looking for test storage... 00:11:22.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.247 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.247 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.248 
05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.248 --rc genhtml_branch_coverage=1 00:11:22.248 --rc genhtml_function_coverage=1 00:11:22.248 --rc genhtml_legend=1 00:11:22.248 --rc geninfo_all_blocks=1 00:11:22.248 --rc geninfo_unexecuted_blocks=1 00:11:22.248 00:11:22.248 ' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.248 --rc genhtml_branch_coverage=1 00:11:22.248 --rc genhtml_function_coverage=1 00:11:22.248 --rc genhtml_legend=1 00:11:22.248 --rc geninfo_all_blocks=1 00:11:22.248 --rc geninfo_unexecuted_blocks=1 00:11:22.248 00:11:22.248 ' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.248 --rc genhtml_branch_coverage=1 00:11:22.248 --rc genhtml_function_coverage=1 00:11:22.248 --rc genhtml_legend=1 00:11:22.248 --rc geninfo_all_blocks=1 00:11:22.248 --rc geninfo_unexecuted_blocks=1 00:11:22.248 00:11:22.248 ' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.248 --rc 
genhtml_branch_coverage=1 00:11:22.248 --rc genhtml_function_coverage=1 00:11:22.248 --rc genhtml_legend=1 00:11:22.248 --rc geninfo_all_blocks=1 00:11:22.248 --rc geninfo_unexecuted_blocks=1 00:11:22.248 00:11:22.248 ' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:22.248 05:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:22.248 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.508 
05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.508 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.088 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.088 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:29.088 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:29.089 05:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:29.089 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:29.089 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:29.089 Found net devices under 0000:af:00.0: cvl_0_0 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.089 05:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:29.089 Found net devices under 0000:af:00.1: cvl_0_1 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.089 
05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:11:29.089 00:11:29.089 --- 10.0.0.2 ping statistics --- 00:11:29.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.089 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:11:29.089 00:11:29.089 --- 10.0.0.1 ping statistics --- 00:11:29.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.089 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.089 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.090 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.090 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.090 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.090 05:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1092755 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1092755 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1092755 ']' 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:29.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.090 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:29.349 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.349 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:29.349 
05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:29.349 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:41.561 Initializing NVMe Controllers 00:11:41.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:41.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:41.561 Initialization complete. Launching workers. 00:11:41.561 ======================================================== 00:11:41.561 Latency(us) 00:11:41.561 Device Information : IOPS MiB/s Average min max 00:11:41.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18355.70 71.70 3486.10 504.29 16333.13 00:11:41.561 ======================================================== 00:11:41.561 Total : 18355.70 71.70 3486.10 504.29 16333.13 00:11:41.561 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:41.561 rmmod nvme_tcp 00:11:41.561 rmmod nvme_fabrics 00:11:41.561 rmmod nvme_keyring 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
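The target configuration exercised above can be condensed into a short dry-run sketch. The RPC names, NQN, bdev sizes, listener address, and port are taken from the log itself; the `rpc.py` invocation style is an assumption (the test actually uses the `rpc_cmd` wrapper against the running nvmf target), and the stub only records the commands instead of executing them.

```shell
# Dry-run sketch of the RPC sequence driven by nvmf_example.sh above.
# Hypothetical standalone form; the real test calls rpc_cmd, not rpc.py.
RPC_LOG=()
rpc() { RPC_LOG+=("rpc.py $*"); echo "rpc.py $*"; }   # record instead of execute

rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, per log
rpc bdev_malloc_create 64 512                          # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After this sequence the log shows spdk_nvme_perf attaching to the listener with `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'`.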
00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1092755 ']' 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1092755 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1092755 ']' 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1092755 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092755 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092755' 00:11:41.561 killing process with pid 1092755 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1092755 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1092755 00:11:41.561 nvmf threads initialize successfully 00:11:41.561 bdev subsystem init successfully 00:11:41.561 created a nvmf target service 00:11:41.561 create targets's poll groups done 00:11:41.561 all subsystems of target started 00:11:41.561 nvmf target is running 00:11:41.561 all subsystems of target stopped 00:11:41.561 destroy targets's poll groups done 00:11:41.561 destroyed the nvmf target service 00:11:41.561 bdev subsystem 
finish successfully 00:11:41.561 nvmf threads destroy successfully 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.561 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.821 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:41.821 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:41.821 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.821 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.821 00:11:41.821 real 0m19.744s 00:11:41.821 user 0m46.111s 00:11:41.821 sys 0m5.963s 00:11:41.821 
05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.821 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.821 ************************************ 00:11:41.821 END TEST nvmf_example 00:11:41.821 ************************************ 00:11:41.821 05:36:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:41.821 05:36:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:41.821 05:36:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.821 05:36:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.083 ************************************ 00:11:42.083 START TEST nvmf_filesystem 00:11:42.083 ************************************ 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:42.083 * Looking for test storage... 
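The filesystem-test prologue below gates lcov options on a version check (`lt 1.15 2`, implemented by `cmp_versions` in scripts/common.sh, which splits each version string and compares the fields). A minimal standalone sketch of that field-by-field numeric comparison (a hypothetical helper, not the exact SPDK implementation, which also handles `-` and `:` separators):

```shell
# version_lt A B: return 0 (true) iff version A < version B,
# comparing dot-separated fields numerically, missing fields as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not less-than
}
```

With this helper, `version_lt 1.15 2` succeeds (as in the log's lcov check), while `version_lt 1.2.3 1.10` also succeeds because fields compare numerically, not lexically.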
00:11:42.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:42.083 
05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.083 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:42.083 --rc genhtml_branch_coverage=1 00:11:42.083 --rc genhtml_function_coverage=1 00:11:42.083 --rc genhtml_legend=1 00:11:42.083 --rc geninfo_all_blocks=1 00:11:42.083 --rc geninfo_unexecuted_blocks=1 00:11:42.083 00:11:42.083 ' 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.083 --rc genhtml_branch_coverage=1 00:11:42.083 --rc genhtml_function_coverage=1 00:11:42.083 --rc genhtml_legend=1 00:11:42.083 --rc geninfo_all_blocks=1 00:11:42.083 --rc geninfo_unexecuted_blocks=1 00:11:42.083 00:11:42.083 ' 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.083 --rc genhtml_branch_coverage=1 00:11:42.083 --rc genhtml_function_coverage=1 00:11:42.083 --rc genhtml_legend=1 00:11:42.083 --rc geninfo_all_blocks=1 00:11:42.083 --rc geninfo_unexecuted_blocks=1 00:11:42.083 00:11:42.083 ' 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.083 --rc genhtml_branch_coverage=1 00:11:42.083 --rc genhtml_function_coverage=1 00:11:42.083 --rc genhtml_legend=1 00:11:42.083 --rc geninfo_all_blocks=1 00:11:42.083 --rc geninfo_unexecuted_blocks=1 00:11:42.083 00:11:42.083 ' 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:42.083 05:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:42.083 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:42.084 05:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:42.084 05:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:42.084 05:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:42.084 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:42.085 05:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:42.085 
05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:42.085 #define SPDK_CONFIG_H 00:11:42.085 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:42.085 #define SPDK_CONFIG_APPS 1 00:11:42.085 #define SPDK_CONFIG_ARCH native 00:11:42.085 #undef SPDK_CONFIG_ASAN 00:11:42.085 #undef SPDK_CONFIG_AVAHI 00:11:42.085 #undef SPDK_CONFIG_CET 00:11:42.085 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:42.085 #define SPDK_CONFIG_COVERAGE 1 00:11:42.085 #define SPDK_CONFIG_CROSS_PREFIX 00:11:42.085 #undef SPDK_CONFIG_CRYPTO 00:11:42.085 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:42.085 #undef SPDK_CONFIG_CUSTOMOCF 00:11:42.085 #undef SPDK_CONFIG_DAOS 00:11:42.085 #define SPDK_CONFIG_DAOS_DIR 00:11:42.085 #define SPDK_CONFIG_DEBUG 1 00:11:42.085 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:42.085 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:42.085 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:42.085 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:42.085 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:42.085 #undef SPDK_CONFIG_DPDK_UADK 00:11:42.085 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:42.085 #define SPDK_CONFIG_EXAMPLES 1 00:11:42.085 #undef SPDK_CONFIG_FC 00:11:42.085 #define SPDK_CONFIG_FC_PATH 00:11:42.085 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:42.085 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:42.085 #define SPDK_CONFIG_FSDEV 1 00:11:42.085 #undef SPDK_CONFIG_FUSE 00:11:42.085 #undef SPDK_CONFIG_FUZZER 00:11:42.085 #define SPDK_CONFIG_FUZZER_LIB 00:11:42.085 #undef SPDK_CONFIG_GOLANG 00:11:42.085 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:42.085 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:42.085 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:42.085 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:42.085 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:42.085 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:42.085 #undef SPDK_CONFIG_HAVE_LZ4 00:11:42.085 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:42.085 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:42.085 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:42.085 #define SPDK_CONFIG_IDXD 1 00:11:42.085 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:42.085 #undef SPDK_CONFIG_IPSEC_MB 00:11:42.085 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:42.085 #define SPDK_CONFIG_ISAL 1 00:11:42.085 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:42.085 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:42.085 #define SPDK_CONFIG_LIBDIR 00:11:42.085 #undef SPDK_CONFIG_LTO 00:11:42.085 #define SPDK_CONFIG_MAX_LCORES 128 00:11:42.085 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:42.085 #define SPDK_CONFIG_NVME_CUSE 1 00:11:42.085 #undef SPDK_CONFIG_OCF 00:11:42.085 #define SPDK_CONFIG_OCF_PATH 00:11:42.085 #define SPDK_CONFIG_OPENSSL_PATH 00:11:42.085 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:42.085 #define SPDK_CONFIG_PGO_DIR 00:11:42.085 #undef SPDK_CONFIG_PGO_USE 00:11:42.085 #define SPDK_CONFIG_PREFIX /usr/local 00:11:42.085 #undef SPDK_CONFIG_RAID5F 00:11:42.085 #undef SPDK_CONFIG_RBD 00:11:42.085 #define SPDK_CONFIG_RDMA 1 00:11:42.085 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:42.085 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:42.085 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:42.085 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:42.085 #define SPDK_CONFIG_SHARED 1 00:11:42.085 #undef SPDK_CONFIG_SMA 00:11:42.085 #define SPDK_CONFIG_TESTS 1 00:11:42.085 #undef SPDK_CONFIG_TSAN 00:11:42.085 #define SPDK_CONFIG_UBLK 1 00:11:42.085 #define SPDK_CONFIG_UBSAN 1 00:11:42.085 #undef SPDK_CONFIG_UNIT_TESTS 00:11:42.085 #undef SPDK_CONFIG_URING 00:11:42.085 #define SPDK_CONFIG_URING_PATH 00:11:42.085 #undef SPDK_CONFIG_URING_ZNS 00:11:42.085 #undef SPDK_CONFIG_USDT 00:11:42.085 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:42.085 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:42.085 #define SPDK_CONFIG_VFIO_USER 1 00:11:42.085 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:42.085 #define SPDK_CONFIG_VHOST 1 00:11:42.085 #define SPDK_CONFIG_VIRTIO 1 00:11:42.085 #undef SPDK_CONFIG_VTUNE 00:11:42.085 #define SPDK_CONFIG_VTUNE_DIR 00:11:42.085 #define SPDK_CONFIG_WERROR 1 00:11:42.085 #define SPDK_CONFIG_WPDK_DIR 00:11:42.085 #undef SPDK_CONFIG_XNVME 00:11:42.085 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.085 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:42.086 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.086 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.086 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:42.086 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.086 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:42.086 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:42.086 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:42.086 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:42.086 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:42.351 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:42.352 05:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:42.352 
05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:42.352 05:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:42.352 
05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:42.352 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:42.352 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:42.352 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:42.352 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:42.352 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:42.352 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:42.353 05:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:42.353 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1095117 ]] 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1095117 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.yUiFdp 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.yUiFdp/tests/target /tmp/spdk.yUiFdp 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88853069824 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837203968 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11984134144 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50407235584 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144435200 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23007232 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=49344368640 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074233344 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120 00:11:42.354 05:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:42.354 * Looking for test storage... 00:11:42.354 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88853069824 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@394 -- # new_size=14198726656 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 
00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.355 05:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.355 --rc genhtml_branch_coverage=1 00:11:42.355 --rc genhtml_function_coverage=1 00:11:42.355 --rc genhtml_legend=1 00:11:42.355 --rc geninfo_all_blocks=1 00:11:42.355 --rc geninfo_unexecuted_blocks=1 00:11:42.355 00:11:42.355 ' 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.355 --rc genhtml_branch_coverage=1 00:11:42.355 --rc genhtml_function_coverage=1 00:11:42.355 --rc genhtml_legend=1 00:11:42.355 --rc geninfo_all_blocks=1 00:11:42.355 --rc geninfo_unexecuted_blocks=1 00:11:42.355 00:11:42.355 ' 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.355 --rc genhtml_branch_coverage=1 00:11:42.355 --rc genhtml_function_coverage=1 00:11:42.355 --rc genhtml_legend=1 00:11:42.355 --rc geninfo_all_blocks=1 00:11:42.355 --rc geninfo_unexecuted_blocks=1 00:11:42.355 00:11:42.355 ' 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.355 --rc 
genhtml_branch_coverage=1 00:11:42.355 --rc genhtml_function_coverage=1 00:11:42.355 --rc genhtml_legend=1 00:11:42.355 --rc geninfo_all_blocks=1 00:11:42.355 --rc geninfo_unexecuted_blocks=1 00:11:42.355 00:11:42.355 ' 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.355 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:42.356 05:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.356 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.086 05:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:49.086 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:49.086 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.086 05:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:49.086 Found net devices under 0000:af:00.0: cvl_0_0 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:49.086 Found net devices under 0000:af:00.1: cvl_0_1 00:11:49.086 05:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.086 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.086 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.086 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.086 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:49.086 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.086 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.086 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.086 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:49.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:49.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:11:49.087 00:11:49.087 --- 10.0.0.2 ping statistics --- 00:11:49.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.087 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:11:49.087 00:11:49.087 --- 10.0.0.1 ping statistics --- 00:11:49.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.087 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:49.087 05:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:49.087 ************************************ 00:11:49.087 START TEST nvmf_filesystem_no_in_capsule 00:11:49.087 ************************************ 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1098297 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1098297 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1098297 ']' 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.087 [2024-12-10 05:36:36.258519] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:11:49.087 [2024-12-10 05:36:36.258569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.087 [2024-12-10 05:36:36.336741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.087 [2024-12-10 05:36:36.378449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.087 [2024-12-10 05:36:36.378483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:49.087 [2024-12-10 05:36:36.378490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.087 [2024-12-10 05:36:36.378496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.087 [2024-12-10 05:36:36.378501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.087 [2024-12-10 05:36:36.379936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.087 [2024-12-10 05:36:36.380045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.087 [2024-12-10 05:36:36.380155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.087 [2024-12-10 05:36:36.380157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.087 [2024-12-10 05:36:36.512888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.087 Malloc1 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.087 [2024-12-10 05:36:36.678351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:49.087 05:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.087 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:49.087 { 00:11:49.087 "name": "Malloc1", 00:11:49.087 "aliases": [ 00:11:49.087 "904f2c0b-491a-4c79-94b2-cc02ce44c978" 00:11:49.087 ], 00:11:49.087 "product_name": "Malloc disk", 00:11:49.087 "block_size": 512, 00:11:49.087 "num_blocks": 1048576, 00:11:49.087 "uuid": "904f2c0b-491a-4c79-94b2-cc02ce44c978", 00:11:49.087 "assigned_rate_limits": { 00:11:49.087 "rw_ios_per_sec": 0, 00:11:49.087 "rw_mbytes_per_sec": 0, 00:11:49.087 "r_mbytes_per_sec": 0, 00:11:49.087 "w_mbytes_per_sec": 0 00:11:49.087 }, 00:11:49.087 "claimed": true, 00:11:49.087 "claim_type": "exclusive_write", 00:11:49.087 "zoned": false, 00:11:49.087 "supported_io_types": { 00:11:49.088 "read": true, 00:11:49.088 "write": true, 00:11:49.088 "unmap": true, 00:11:49.088 "flush": true, 00:11:49.088 "reset": true, 00:11:49.088 "nvme_admin": false, 00:11:49.088 "nvme_io": false, 00:11:49.088 "nvme_io_md": false, 00:11:49.088 "write_zeroes": true, 00:11:49.088 "zcopy": true, 00:11:49.088 "get_zone_info": false, 00:11:49.088 "zone_management": false, 00:11:49.088 "zone_append": false, 00:11:49.088 "compare": false, 00:11:49.088 "compare_and_write": 
false, 00:11:49.088 "abort": true, 00:11:49.088 "seek_hole": false, 00:11:49.088 "seek_data": false, 00:11:49.088 "copy": true, 00:11:49.088 "nvme_iov_md": false 00:11:49.088 }, 00:11:49.088 "memory_domains": [ 00:11:49.088 { 00:11:49.088 "dma_device_id": "system", 00:11:49.088 "dma_device_type": 1 00:11:49.088 }, 00:11:49.088 { 00:11:49.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.088 "dma_device_type": 2 00:11:49.088 } 00:11:49.088 ], 00:11:49.088 "driver_specific": {} 00:11:49.088 } 00:11:49.088 ]' 00:11:49.088 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:49.088 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:49.088 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:49.088 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:49.088 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:49.088 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:49.088 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:49.088 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.466 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:50.466 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.466 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.466 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.466 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:52.371 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:52.371 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:52.371 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:52.371 05:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:52.371 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:52.630 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:53.197 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:54.133 05:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.133 ************************************ 00:11:54.133 START TEST filesystem_ext4 00:11:54.133 ************************************ 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:54.133 05:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:54.133 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:54.133 mke2fs 1.47.0 (5-Feb-2023) 00:11:54.392 Discarding device blocks: 0/522240 done 00:11:54.392 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:54.392 Filesystem UUID: 446dc132-d9e8-44d1-aad8-dc7847cefe96 00:11:54.392 Superblock backups stored on blocks: 00:11:54.392 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:54.392 00:11:54.392 Allocating group tables: 0/64 done 00:11:54.392 Writing inode tables: 0/64 done 00:11:54.392 Creating journal (8192 blocks): done 00:11:54.392 Writing superblocks and filesystem accounting information: 0/64 done 00:11:54.392 00:11:54.392 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:54.392 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:00.956 05:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1098297 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:00.956 00:12:00.956 real 0m5.767s 00:12:00.956 user 0m0.016s 00:12:00.956 sys 0m0.080s 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:00.956 ************************************ 00:12:00.956 END TEST filesystem_ext4 00:12:00.956 ************************************ 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:00.956 
05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.956 ************************************ 00:12:00.956 START TEST filesystem_btrfs 00:12:00.956 ************************************ 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:00.956 05:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:00.956 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:00.956 btrfs-progs v6.8.1 00:12:00.956 See https://btrfs.readthedocs.io for more information. 00:12:00.956 00:12:00.956 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:00.956 NOTE: several default settings have changed in version 5.15, please make sure 00:12:00.956 this does not affect your deployments: 00:12:00.956 - DUP for metadata (-m dup) 00:12:00.956 - enabled no-holes (-O no-holes) 00:12:00.956 - enabled free-space-tree (-R free-space-tree) 00:12:00.956 00:12:00.956 Label: (null) 00:12:00.956 UUID: bb79d500-dce6-4cf6-8a95-3508f8c8cc4c 00:12:00.956 Node size: 16384 00:12:00.956 Sector size: 4096 (CPU page size: 4096) 00:12:00.956 Filesystem size: 510.00MiB 00:12:00.956 Block group profiles: 00:12:00.956 Data: single 8.00MiB 00:12:00.956 Metadata: DUP 32.00MiB 00:12:00.956 System: DUP 8.00MiB 00:12:00.956 SSD detected: yes 00:12:00.956 Zoned device: no 00:12:00.956 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:00.956 Checksum: crc32c 00:12:00.956 Number of devices: 1 00:12:00.956 Devices: 00:12:00.956 ID SIZE PATH 00:12:00.956 1 510.00MiB /dev/nvme0n1p1 00:12:00.956 00:12:00.956 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:00.956 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:00.956 05:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:00.956 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:00.956 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:00.956 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:00.956 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:00.956 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:00.956 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1098297 00:12:00.956 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:00.956 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:00.956 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:00.957 00:12:00.957 real 0m0.580s 00:12:00.957 user 0m0.022s 00:12:00.957 sys 0m0.119s 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.957 
05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:00.957 ************************************ 00:12:00.957 END TEST filesystem_btrfs 00:12:00.957 ************************************ 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.957 ************************************ 00:12:00.957 START TEST filesystem_xfs 00:12:00.957 ************************************ 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:00.957 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:00.957 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:00.957 = sectsz=512 attr=2, projid32bit=1 00:12:00.957 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:00.957 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:00.957 data = bsize=4096 blocks=130560, imaxpct=25 00:12:00.957 = sunit=0 swidth=0 blks 00:12:00.957 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:00.957 log =internal log bsize=4096 blocks=16384, version=2 00:12:00.957 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:00.957 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:01.893 Discarding blocks...Done. 
00:12:01.893 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:01.893 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:04.425 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1098297 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:04.425 05:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:04.425 00:12:04.425 real 0m3.580s 00:12:04.425 user 0m0.027s 00:12:04.425 sys 0m0.070s 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:04.425 ************************************ 00:12:04.425 END TEST filesystem_xfs 00:12:04.425 ************************************ 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1098297 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1098297 ']' 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1098297 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1098297 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1098297' 00:12:04.425 killing process with pid 1098297 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1098297 00:12:04.425 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1098297 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:04.994 00:12:04.994 real 0m16.385s 00:12:04.994 user 1m4.411s 00:12:04.994 sys 0m1.402s 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.994 ************************************ 00:12:04.994 END TEST nvmf_filesystem_no_in_capsule 00:12:04.994 ************************************ 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.994 05:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:04.994 ************************************ 00:12:04.994 START TEST nvmf_filesystem_in_capsule 00:12:04.994 ************************************ 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1101213 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1101213 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1101213 ']' 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.994 05:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.994 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.994 [2024-12-10 05:36:52.717563] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:12:04.994 [2024-12-10 05:36:52.717608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.994 [2024-12-10 05:36:52.796405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.994 [2024-12-10 05:36:52.833055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.994 [2024-12-10 05:36:52.833094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.994 [2024-12-10 05:36:52.833101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.994 [2024-12-10 05:36:52.833107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.994 [2024-12-10 05:36:52.833112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:04.994 [2024-12-10 05:36:52.834584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.994 [2024-12-10 05:36:52.834691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.994 [2024-12-10 05:36:52.834777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.994 [2024-12-10 05:36:52.834777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.929 [2024-12-10 05:36:53.602321] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.929 Malloc1 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.929 05:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.929 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.930 [2024-12-10 05:36:53.760348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.930 05:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:05.930 { 00:12:05.930 "name": "Malloc1", 00:12:05.930 "aliases": [ 00:12:05.930 "d9c0283f-c458-458e-97e2-339bcd5be2b0" 00:12:05.930 ], 00:12:05.930 "product_name": "Malloc disk", 00:12:05.930 "block_size": 512, 00:12:05.930 "num_blocks": 1048576, 00:12:05.930 "uuid": "d9c0283f-c458-458e-97e2-339bcd5be2b0", 00:12:05.930 "assigned_rate_limits": { 00:12:05.930 "rw_ios_per_sec": 0, 00:12:05.930 "rw_mbytes_per_sec": 0, 00:12:05.930 "r_mbytes_per_sec": 0, 00:12:05.930 "w_mbytes_per_sec": 0 00:12:05.930 }, 00:12:05.930 "claimed": true, 00:12:05.930 "claim_type": "exclusive_write", 00:12:05.930 "zoned": false, 00:12:05.930 "supported_io_types": { 00:12:05.930 "read": true, 00:12:05.930 "write": true, 00:12:05.930 "unmap": true, 00:12:05.930 "flush": true, 00:12:05.930 "reset": true, 00:12:05.930 "nvme_admin": false, 00:12:05.930 "nvme_io": false, 00:12:05.930 "nvme_io_md": false, 00:12:05.930 "write_zeroes": true, 00:12:05.930 "zcopy": true, 00:12:05.930 "get_zone_info": false, 00:12:05.930 "zone_management": false, 00:12:05.930 "zone_append": false, 00:12:05.930 "compare": false, 00:12:05.930 "compare_and_write": false, 00:12:05.930 "abort": true, 00:12:05.930 "seek_hole": false, 00:12:05.930 "seek_data": false, 00:12:05.930 "copy": true, 00:12:05.930 "nvme_iov_md": false 00:12:05.930 }, 00:12:05.930 "memory_domains": [ 00:12:05.930 { 00:12:05.930 "dma_device_id": "system", 00:12:05.930 "dma_device_type": 1 00:12:05.930 }, 00:12:05.930 { 00:12:05.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.930 "dma_device_type": 2 00:12:05.930 } 00:12:05.930 ], 00:12:05.930 
"driver_specific": {} 00:12:05.930 } 00:12:05.930 ]' 00:12:05.930 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:06.189 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:06.189 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:06.189 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:06.189 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:06.189 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:06.189 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:06.189 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.565 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.565 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:07.565 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.565 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:07.565 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:09.468 05:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:09.468 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.843 ************************************ 00:12:10.843 START TEST filesystem_in_capsule_ext4 00:12:10.843 ************************************ 00:12:10.843 05:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:10.843 mke2fs 1.47.0 (5-Feb-2023) 00:12:10.843 Discarding device blocks: 
0/522240 done 00:12:10.843 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:10.843 Filesystem UUID: 0f9ec050-b88d-421e-933f-ea4f848a7808 00:12:10.843 Superblock backups stored on blocks: 00:12:10.843 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:10.843 00:12:10.843 Allocating group tables: 0/64 done 00:12:10.843 Writing inode tables: 0/64 done 00:12:10.843 Creating journal (8192 blocks): done 00:12:10.843 Writing superblocks and filesystem accounting information: 0/64 done 00:12:10.843 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:10.843 05:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.111 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1101213 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:16.373 00:12:16.373 real 0m5.696s 00:12:16.373 user 0m0.027s 00:12:16.373 sys 0m0.074s 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:16.373 ************************************ 00:12:16.373 END TEST filesystem_in_capsule_ext4 00:12:16.373 ************************************ 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.373 ************************************ 00:12:16.373 START 
TEST filesystem_in_capsule_btrfs 00:12:16.373 ************************************ 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:16.373 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:16.632 btrfs-progs v6.8.1 00:12:16.632 See https://btrfs.readthedocs.io for more information. 00:12:16.632 00:12:16.632 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:16.632 NOTE: several default settings have changed in version 5.15, please make sure 00:12:16.632 this does not affect your deployments: 00:12:16.632 - DUP for metadata (-m dup) 00:12:16.632 - enabled no-holes (-O no-holes) 00:12:16.632 - enabled free-space-tree (-R free-space-tree) 00:12:16.632 00:12:16.632 Label: (null) 00:12:16.632 UUID: be28a383-6a6a-4031-a722-f7bc3d60f8ca 00:12:16.632 Node size: 16384 00:12:16.632 Sector size: 4096 (CPU page size: 4096) 00:12:16.632 Filesystem size: 510.00MiB 00:12:16.632 Block group profiles: 00:12:16.632 Data: single 8.00MiB 00:12:16.632 Metadata: DUP 32.00MiB 00:12:16.632 System: DUP 8.00MiB 00:12:16.632 SSD detected: yes 00:12:16.632 Zoned device: no 00:12:16.632 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:16.632 Checksum: crc32c 00:12:16.632 Number of devices: 1 00:12:16.632 Devices: 00:12:16.632 ID SIZE PATH 00:12:16.632 1 510.00MiB /dev/nvme0n1p1 00:12:16.632 00:12:16.632 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:16.632 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.632 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.632 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:16.632 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.632 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:16.632 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:16.632 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1101213 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:16.891 00:12:16.891 real 0m0.412s 00:12:16.891 user 0m0.028s 00:12:16.891 sys 0m0.113s 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:16.891 ************************************ 00:12:16.891 END TEST filesystem_in_capsule_btrfs 00:12:16.891 ************************************ 00:12:16.891 05:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.891 ************************************ 00:12:16.891 START TEST filesystem_in_capsule_xfs 00:12:16.891 ************************************ 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:16.891 
05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:16.891 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:16.891 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:16.891 = sectsz=512 attr=2, projid32bit=1 00:12:16.891 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:16.891 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:16.891 data = bsize=4096 blocks=130560, imaxpct=25 00:12:16.891 = sunit=0 swidth=0 blks 00:12:16.891 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:16.891 log =internal log bsize=4096 blocks=16384, version=2 00:12:16.891 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:16.891 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:17.826 Discarding blocks...Done. 
00:12:17.826 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:17.826 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1101213 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:19.728 00:12:19.728 real 0m2.682s 00:12:19.728 user 0m0.022s 00:12:19.728 sys 0m0.077s 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:19.728 ************************************ 00:12:19.728 END TEST filesystem_in_capsule_xfs 00:12:19.728 ************************************ 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:19.728 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.987 05:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1101213 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1101213 ']' 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1101213 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.987 05:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1101213 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1101213' 00:12:19.987 killing process with pid 1101213 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1101213 00:12:19.987 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1101213 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:20.555 00:12:20.555 real 0m15.492s 00:12:20.555 user 1m1.058s 00:12:20.555 sys 0m1.406s 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.555 ************************************ 00:12:20.555 END TEST nvmf_filesystem_in_capsule 00:12:20.555 ************************************ 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.555 rmmod nvme_tcp 00:12:20.555 rmmod nvme_fabrics 00:12:20.555 rmmod nvme_keyring 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.555 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.465 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:22.465 00:12:22.465 real 0m40.576s 00:12:22.465 user 2m7.490s 00:12:22.465 sys 0m7.531s 00:12:22.465 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.465 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:22.465 ************************************ 00:12:22.465 END TEST nvmf_filesystem 00:12:22.465 ************************************ 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:22.724 ************************************ 00:12:22.724 START TEST nvmf_target_discovery 00:12:22.724 ************************************ 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:22.724 * Looking for test storage... 
00:12:22.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.724 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:22.725 
05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:22.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.725 --rc genhtml_branch_coverage=1 00:12:22.725 --rc genhtml_function_coverage=1 00:12:22.725 --rc genhtml_legend=1 00:12:22.725 --rc geninfo_all_blocks=1 00:12:22.725 --rc geninfo_unexecuted_blocks=1 00:12:22.725 00:12:22.725 ' 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:22.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.725 --rc genhtml_branch_coverage=1 00:12:22.725 --rc genhtml_function_coverage=1 00:12:22.725 --rc genhtml_legend=1 00:12:22.725 --rc geninfo_all_blocks=1 00:12:22.725 --rc geninfo_unexecuted_blocks=1 00:12:22.725 00:12:22.725 ' 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:22.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.725 --rc genhtml_branch_coverage=1 00:12:22.725 --rc genhtml_function_coverage=1 00:12:22.725 --rc genhtml_legend=1 00:12:22.725 --rc geninfo_all_blocks=1 00:12:22.725 --rc geninfo_unexecuted_blocks=1 00:12:22.725 00:12:22.725 ' 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:22.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.725 --rc genhtml_branch_coverage=1 00:12:22.725 --rc genhtml_function_coverage=1 00:12:22.725 --rc genhtml_legend=1 00:12:22.725 --rc geninfo_all_blocks=1 00:12:22.725 --rc geninfo_unexecuted_blocks=1 00:12:22.725 00:12:22.725 ' 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.725 05:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:22.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.725 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.985 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:22.985 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:22.985 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:22.985 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.555 05:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.555 05:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:29.555 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:29.555 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.555 05:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:29.555 Found net devices under 0000:af:00.0: cvl_0_0 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.555 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.555 05:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:29.556 Found net devices under 0000:af:00.1: cvl_0_1 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:12:29.556 00:12:29.556 --- 10.0.0.2 ping statistics --- 00:12:29.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.556 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:29.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:12:29.556 00:12:29.556 --- 10.0.0.1 ping statistics --- 00:12:29.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.556 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1107356 00:12:29.556 05:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1107356 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1107356 ']' 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.556 [2024-12-10 05:37:16.559676] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:12:29.556 [2024-12-10 05:37:16.559728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.556 [2024-12-10 05:37:16.638995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.556 [2024-12-10 05:37:16.679240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:29.556 [2024-12-10 05:37:16.679279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.556 [2024-12-10 05:37:16.679287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.556 [2024-12-10 05:37:16.679292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.556 [2024-12-10 05:37:16.679297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.556 [2024-12-10 05:37:16.680743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.556 [2024-12-10 05:37:16.680850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.556 [2024-12-10 05:37:16.680960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.556 [2024-12-10 05:37:16.680961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.556 [2024-12-10 05:37:16.830731] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.556 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.556 Null1 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 
05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 [2024-12-10 05:37:16.884321] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 Null2 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 
05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 Null3 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 Null4 00:12:29.557 
05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.557 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.557 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.557 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:29.557 00:12:29.557 Discovery Log Number of Records 6, Generation counter 6 00:12:29.557 =====Discovery Log Entry 0====== 00:12:29.557 trtype: tcp 00:12:29.557 adrfam: ipv4 00:12:29.557 subtype: current discovery subsystem 00:12:29.557 treq: not required 00:12:29.557 portid: 0 00:12:29.557 trsvcid: 4420 00:12:29.557 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:29.557 traddr: 10.0.0.2 00:12:29.557 eflags: explicit discovery connections, duplicate discovery information 00:12:29.557 sectype: none 00:12:29.557 =====Discovery Log Entry 1====== 00:12:29.557 trtype: tcp 00:12:29.557 adrfam: ipv4 00:12:29.557 subtype: nvme subsystem 00:12:29.557 treq: not required 00:12:29.557 portid: 0 00:12:29.557 trsvcid: 4420 00:12:29.557 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:29.557 traddr: 10.0.0.2 00:12:29.557 eflags: none 00:12:29.557 sectype: none 00:12:29.557 =====Discovery Log Entry 2====== 00:12:29.557 
trtype: tcp 00:12:29.557 adrfam: ipv4 00:12:29.557 subtype: nvme subsystem 00:12:29.557 treq: not required 00:12:29.557 portid: 0 00:12:29.557 trsvcid: 4420 00:12:29.557 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:29.557 traddr: 10.0.0.2 00:12:29.557 eflags: none 00:12:29.557 sectype: none 00:12:29.557 =====Discovery Log Entry 3====== 00:12:29.557 trtype: tcp 00:12:29.557 adrfam: ipv4 00:12:29.557 subtype: nvme subsystem 00:12:29.557 treq: not required 00:12:29.557 portid: 0 00:12:29.557 trsvcid: 4420 00:12:29.557 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:29.557 traddr: 10.0.0.2 00:12:29.557 eflags: none 00:12:29.557 sectype: none 00:12:29.557 =====Discovery Log Entry 4====== 00:12:29.557 trtype: tcp 00:12:29.557 adrfam: ipv4 00:12:29.557 subtype: nvme subsystem 00:12:29.557 treq: not required 00:12:29.557 portid: 0 00:12:29.557 trsvcid: 4420 00:12:29.557 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:29.557 traddr: 10.0.0.2 00:12:29.557 eflags: none 00:12:29.557 sectype: none 00:12:29.557 =====Discovery Log Entry 5====== 00:12:29.557 trtype: tcp 00:12:29.557 adrfam: ipv4 00:12:29.557 subtype: discovery subsystem referral 00:12:29.557 treq: not required 00:12:29.557 portid: 0 00:12:29.557 trsvcid: 4430 00:12:29.558 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:29.558 traddr: 10.0.0.2 00:12:29.558 eflags: none 00:12:29.558 sectype: none 00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:29.558 Perform nvmf subsystem discovery via RPC 00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.558 [ 00:12:29.558 { 00:12:29.558 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:29.558 "subtype": "Discovery", 00:12:29.558 "listen_addresses": [ 00:12:29.558 { 00:12:29.558 "trtype": "TCP", 00:12:29.558 "adrfam": "IPv4", 00:12:29.558 "traddr": "10.0.0.2", 00:12:29.558 "trsvcid": "4420" 00:12:29.558 } 00:12:29.558 ], 00:12:29.558 "allow_any_host": true, 00:12:29.558 "hosts": [] 00:12:29.558 }, 00:12:29.558 { 00:12:29.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:29.558 "subtype": "NVMe", 00:12:29.558 "listen_addresses": [ 00:12:29.558 { 00:12:29.558 "trtype": "TCP", 00:12:29.558 "adrfam": "IPv4", 00:12:29.558 "traddr": "10.0.0.2", 00:12:29.558 "trsvcid": "4420" 00:12:29.558 } 00:12:29.558 ], 00:12:29.558 "allow_any_host": true, 00:12:29.558 "hosts": [], 00:12:29.558 "serial_number": "SPDK00000000000001", 00:12:29.558 "model_number": "SPDK bdev Controller", 00:12:29.558 "max_namespaces": 32, 00:12:29.558 "min_cntlid": 1, 00:12:29.558 "max_cntlid": 65519, 00:12:29.558 "namespaces": [ 00:12:29.558 { 00:12:29.558 "nsid": 1, 00:12:29.558 "bdev_name": "Null1", 00:12:29.558 "name": "Null1", 00:12:29.558 "nguid": "07A55E8A3B3749129DC9C922624983FA", 00:12:29.558 "uuid": "07a55e8a-3b37-4912-9dc9-c922624983fa" 00:12:29.558 } 00:12:29.558 ] 00:12:29.558 }, 00:12:29.558 { 00:12:29.558 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:29.558 "subtype": "NVMe", 00:12:29.558 "listen_addresses": [ 00:12:29.558 { 00:12:29.558 "trtype": "TCP", 00:12:29.558 "adrfam": "IPv4", 00:12:29.558 "traddr": "10.0.0.2", 00:12:29.558 "trsvcid": "4420" 00:12:29.558 } 00:12:29.558 ], 00:12:29.558 "allow_any_host": true, 00:12:29.558 "hosts": [], 00:12:29.558 "serial_number": "SPDK00000000000002", 00:12:29.558 "model_number": "SPDK bdev Controller", 00:12:29.558 "max_namespaces": 32, 00:12:29.558 "min_cntlid": 1, 00:12:29.558 "max_cntlid": 65519, 00:12:29.558 "namespaces": [ 00:12:29.558 { 00:12:29.558 "nsid": 1, 00:12:29.558 "bdev_name": "Null2", 00:12:29.558 "name": "Null2", 00:12:29.558 "nguid": "19013BEA55344B54BD2DAEC50FE9803D", 
00:12:29.558 "uuid": "19013bea-5534-4b54-bd2d-aec50fe9803d" 00:12:29.558 } 00:12:29.558 ] 00:12:29.558 }, 00:12:29.558 { 00:12:29.558 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:29.558 "subtype": "NVMe", 00:12:29.558 "listen_addresses": [ 00:12:29.558 { 00:12:29.558 "trtype": "TCP", 00:12:29.558 "adrfam": "IPv4", 00:12:29.558 "traddr": "10.0.0.2", 00:12:29.558 "trsvcid": "4420" 00:12:29.558 } 00:12:29.558 ], 00:12:29.558 "allow_any_host": true, 00:12:29.558 "hosts": [], 00:12:29.558 "serial_number": "SPDK00000000000003", 00:12:29.558 "model_number": "SPDK bdev Controller", 00:12:29.558 "max_namespaces": 32, 00:12:29.558 "min_cntlid": 1, 00:12:29.558 "max_cntlid": 65519, 00:12:29.558 "namespaces": [ 00:12:29.558 { 00:12:29.558 "nsid": 1, 00:12:29.558 "bdev_name": "Null3", 00:12:29.558 "name": "Null3", 00:12:29.558 "nguid": "24F2A1D863054C89B5A48FC68FED23A5", 00:12:29.558 "uuid": "24f2a1d8-6305-4c89-b5a4-8fc68fed23a5" 00:12:29.558 } 00:12:29.558 ] 00:12:29.558 }, 00:12:29.558 { 00:12:29.558 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:29.558 "subtype": "NVMe", 00:12:29.558 "listen_addresses": [ 00:12:29.558 { 00:12:29.558 "trtype": "TCP", 00:12:29.558 "adrfam": "IPv4", 00:12:29.558 "traddr": "10.0.0.2", 00:12:29.558 "trsvcid": "4420" 00:12:29.558 } 00:12:29.558 ], 00:12:29.558 "allow_any_host": true, 00:12:29.558 "hosts": [], 00:12:29.558 "serial_number": "SPDK00000000000004", 00:12:29.558 "model_number": "SPDK bdev Controller", 00:12:29.558 "max_namespaces": 32, 00:12:29.558 "min_cntlid": 1, 00:12:29.558 "max_cntlid": 65519, 00:12:29.558 "namespaces": [ 00:12:29.558 { 00:12:29.558 "nsid": 1, 00:12:29.558 "bdev_name": "Null4", 00:12:29.558 "name": "Null4", 00:12:29.558 "nguid": "554D8099AB6F432DADBAD1A9BD3B7CCA", 00:12:29.558 "uuid": "554d8099-ab6f-432d-adba-d1a9bd3b7cca" 00:12:29.558 } 00:12:29.558 ] 00:12:29.558 } 00:12:29.558 ] 00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.558 
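The `nvmf_get_subsystems` RPC dump above lists the discovery subsystem plus `cnode1`-`cnode4`, each with one null-bdev namespace. A small sketch of consuming that JSON (this is an illustration, not an SPDK tool; the sample payload below mirrors the shape of the traced output, trimmed to one NVMe subsystem):

```python
import json

# Sample payload modeled on the nvmf_get_subsystems output in the log,
# trimmed to the discovery subsystem and cnode1.
payload = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "allow_any_host": true, "hosts": []},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "allow_any_host": true, "hosts": [],
   "serial_number": "SPDK00000000000001",
   "namespaces": [{"nsid": 1, "bdev_name": "Null1", "name": "Null1"}]}
]
""")

def summarize(subsystems):
    """Map each NVMe subsystem NQN to the bdevs backing its namespaces."""
    return {
        s["nqn"]: [ns["bdev_name"] for ns in s.get("namespaces", [])]
        for s in subsystems
        if s["subtype"] == "NVMe"
    }

print(summarize(payload))  # {'nqn.2016-06.io.spdk:cnode1': ['Null1']}
```

Applied to the full dump in the log, this kind of check confirms the test's setup loop: four subsystems, each exposing exactly one `NullN` namespace, before the teardown loop (`nvmf_delete_subsystem` / `bdev_null_delete`) that follows.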
05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:12:29.558 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:29.559 rmmod nvme_tcp
00:12:29.559 rmmod nvme_fabrics
00:12:29.559 rmmod nvme_keyring
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1107356 ']'
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1107356
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1107356 ']'
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1107356
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:29.559 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1107356
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1107356'
00:12:29.818 killing process with pid 1107356
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1107356
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1107356
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:29.818 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:32.356
00:12:32.356 real 0m9.302s
00:12:32.356 user 0m5.667s
00:12:32.356 sys 0m4.757s
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:32.356 ************************************
00:12:32.356 END TEST nvmf_target_discovery
00:12:32.356 ************************************
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:32.356 ************************************
00:12:32.356 START TEST nvmf_referrals
00:12:32.356 ************************************
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:32.356 * Looking for test storage...
00:12:32.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2
00:12:32.356 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:32.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.357 --rc genhtml_branch_coverage=1
00:12:32.357 --rc genhtml_function_coverage=1
00:12:32.357 --rc genhtml_legend=1
00:12:32.357 --rc geninfo_all_blocks=1
00:12:32.357 --rc geninfo_unexecuted_blocks=1
00:12:32.357
00:12:32.357 '
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:32.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.357 --rc genhtml_branch_coverage=1
00:12:32.357 --rc genhtml_function_coverage=1
00:12:32.357 --rc genhtml_legend=1
00:12:32.357 --rc geninfo_all_blocks=1
00:12:32.357 --rc geninfo_unexecuted_blocks=1
00:12:32.357
00:12:32.357 '
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:12:32.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.357 --rc genhtml_branch_coverage=1
00:12:32.357 --rc genhtml_function_coverage=1
00:12:32.357 --rc genhtml_legend=1
00:12:32.357 --rc geninfo_all_blocks=1
00:12:32.357 --rc geninfo_unexecuted_blocks=1
00:12:32.357
00:12:32.357 '
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:12:32.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.357 --rc genhtml_branch_coverage=1
00:12:32.357 --rc genhtml_function_coverage=1
00:12:32.357 --rc genhtml_legend=1
00:12:32.357 --rc geninfo_all_blocks=1
00:12:32.357 --rc geninfo_unexecuted_blocks=1
00:12:32.357
00:12:32.357 '
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.357
05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.357 05:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:32.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable
00:12:32.357 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=()
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=()
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=()
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:12:38.929 Found 0000:af:00.0 (0x8086 - 0x159b)
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:12:38.929 Found 0000:af:00.1 (0x8086 - 0x159b)
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:12:38.929 Found net devices under 0000:af:00.0: cvl_0_0
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:12:38.929 Found net devices under 0000:af:00.1: cvl_0_1
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:38.929 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals --
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:38.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:12:38.930 00:12:38.930 --- 10.0.0.2 ping statistics --- 00:12:38.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.930 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:12:38.930 00:12:38.930 --- 10.0.0.1 ping statistics --- 00:12:38.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.930 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:38.930 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1111094 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1111094 00:12:38.930 
05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1111094 ']' 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.930 [2024-12-10 05:37:26.081510] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:12:38.930 [2024-12-10 05:37:26.081552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.930 [2024-12-10 05:37:26.159930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.930 [2024-12-10 05:37:26.200887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.930 [2024-12-10 05:37:26.200922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:38.930 [2024-12-10 05:37:26.200932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.930 [2024-12-10 05:37:26.200939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.930 [2024-12-10 05:37:26.200945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.930 [2024-12-10 05:37:26.202437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.930 [2024-12-10 05:37:26.202550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.930 [2024-12-10 05:37:26.202656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.930 [2024-12-10 05:37:26.202657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.930 [2024-12-10 05:37:26.340324] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.930 [2024-12-10 05:37:26.366319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:38.930 05:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:38.930 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.931 05:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:38.931 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:39.190 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:39.190 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:39.190 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:39.190 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.190 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:39.190 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:39.449 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:39.449 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:39.449 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:39.449 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:39.449 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:39.449 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:39.449 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:39.708 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:39.708 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:39.708 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:39.708 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:39.708 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:39.708 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:39.967 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:40.226 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:40.226 05:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:40.226 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:40.226 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:40.226 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.226 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.485 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.744 rmmod nvme_tcp 00:12:40.744 rmmod nvme_fabrics 00:12:40.744 rmmod nvme_keyring 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1111094 ']' 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1111094 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1111094 ']' 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1111094 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1111094 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1111094' 00:12:40.744 killing process with pid 1111094 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1111094 00:12:40.744 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1111094 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.004 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.910 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:43.170 00:12:43.170 real 0m11.033s 00:12:43.170 user 0m12.632s 00:12:43.170 sys 0m5.205s 00:12:43.170 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.170 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.170 
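The `killprocess 1111094` sequence above checks the pid is alive with `kill -0`, inspects its command name with `ps --no-headers -o comm=`, then kills it and waits for it to exit. A sketch of that pattern, using a throwaway `sleep` process instead of the nvmf target pid from the log:

```shell
#!/usr/bin/env bash
# killprocess pattern: probe with kill -0, look up the command name,
# then kill and reap. The sleep child stands in for the nvmf target.
sleep 30 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then
  name=$(ps --no-headers -o comm= "$pid")
  echo "killing process with pid $pid ($name)"
  kill "$pid"
  wait "$pid" 2>/dev/null
fi
```

`kill -0` sends no signal; it only reports (via exit status) whether the pid exists and is signalable, which is why the harness uses it as an existence probe before the real `kill`.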
************************************ 00:12:43.170 END TEST nvmf_referrals 00:12:43.170 ************************************ 00:12:43.170 05:37:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:43.170 05:37:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.170 05:37:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.170 05:37:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.170 ************************************ 00:12:43.170 START TEST nvmf_connect_disconnect 00:12:43.170 ************************************ 00:12:43.170 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:43.170 * Looking for test storage... 
00:12:43.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.170 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:43.170 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:43.170 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
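The `cmp_versions` trace above (scripts/common.sh) splits both version strings on `.`, `-` and `:` into arrays, then compares them field by field numerically, padding the shorter one with zeros. A simplified sketch of that logic; `lt` here mirrors the harness's `lt 1.15 2` check (is the installed lcov older than 2?) but is a condensed reimplementation, not the SPDK code itself:

```shell
#!/usr/bin/env bash
# lt A B: succeed iff version A < version B, comparing dotted fields
# numerically. Fields are assumed numeric, as in the traced comparison.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.1 2.0 || echo "2.1 >= 2.0"
```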
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:43.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.170 --rc genhtml_branch_coverage=1 00:12:43.170 --rc genhtml_function_coverage=1 00:12:43.170 --rc genhtml_legend=1 00:12:43.170 --rc geninfo_all_blocks=1 00:12:43.170 --rc geninfo_unexecuted_blocks=1 00:12:43.170 00:12:43.170 ' 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:43.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.170 --rc genhtml_branch_coverage=1 00:12:43.170 --rc genhtml_function_coverage=1 00:12:43.170 --rc genhtml_legend=1 00:12:43.170 --rc geninfo_all_blocks=1 00:12:43.170 --rc geninfo_unexecuted_blocks=1 00:12:43.170 00:12:43.170 ' 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:43.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.170 --rc genhtml_branch_coverage=1 00:12:43.170 --rc genhtml_function_coverage=1 00:12:43.170 --rc genhtml_legend=1 00:12:43.170 --rc geninfo_all_blocks=1 00:12:43.170 --rc geninfo_unexecuted_blocks=1 00:12:43.170 00:12:43.170 ' 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:43.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.170 --rc genhtml_branch_coverage=1 00:12:43.170 --rc genhtml_function_coverage=1 00:12:43.170 --rc genhtml_legend=1 00:12:43.170 --rc geninfo_all_blocks=1 00:12:43.170 --rc geninfo_unexecuted_blocks=1 00:12:43.170 00:12:43.170 ' 00:12:43.170 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.430 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
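The enormous PATH values above come from `paths/export.sh` prepending the same `/opt/...` directories every time it is sourced, so duplicates accumulate across runs. A sketch of deduplicating such a list while preserving first-seen order; `path_dedup` is a helper name invented here, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Remove duplicate entries from a colon-separated list, keeping the
# first occurrence of each entry in its original position.
path_dedup() {
  local entry out=
  local -A seen
  IFS=: read -ra entries <<< "$1"
  for entry in "${entries[@]}"; do
    [[ -z $entry ]] && continue          # skip empty fields
    [[ -n ${seen[$entry]} ]] && continue # already emitted
    seen[$entry]=1
    out+=${out:+:}$entry
  done
  printf '%s\n' "$out"
}

path_dedup "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/sbin"
# → /opt/go/bin:/usr/bin:/sbin
```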
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.431 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.109 05:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:50.109 05:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:50.109 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:50.109 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.109 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.110 05:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:50.110 Found net devices under 0000:af:00.0: cvl_0_0 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.110 05:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:50.110 Found net devices under 0000:af:00.1: cvl_0_1 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
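The net-device lookup traced above globs `/sys/bus/pci/devices/$pci/net/*` for each NVMe-capable NIC and then strips the directory prefix with `${arr[@]##*/}` to get bare interface names (`cvl_0_0`, `cvl_0_1`). A sketch of that glob-and-strip idiom, simulated with a temporary directory instead of real sysfs so it runs anywhere:

```shell
#!/usr/bin/env bash
# Simulate /sys/bus/pci/devices/<pci>/net/<ifname> with a temp dir.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0"

# Glob the per-device net/ directory, as in nvmf/common.sh@411.
pci_net_devs=("$sysfs/0000:af:00.0/net/"*)
# Keep only the basename of each match, as in nvmf/common.sh@427.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under 0000:af:00.0: ${pci_net_devs[*]}"
# → Found net devices under 0000:af:00.0: cvl_0_0

rm -rf "$sysfs"
```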
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.110 05:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:50.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:12:50.110 00:12:50.110 --- 10.0.0.2 ping statistics --- 00:12:50.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.110 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:12:50.110 00:12:50.110 --- 10.0.0.1 ping statistics --- 00:12:50.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.110 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:50.110 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1115104 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1115104 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1115104 ']' 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:50.110 [2024-12-10 05:37:37.099811] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:12:50.110 [2024-12-10 05:37:37.099863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.110 [2024-12-10 05:37:37.179724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.110 [2024-12-10 05:37:37.221070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:50.110 [2024-12-10 05:37:37.221107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.110 [2024-12-10 05:37:37.221117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.110 [2024-12-10 05:37:37.221125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.110 [2024-12-10 05:37:37.221131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.110 [2024-12-10 05:37:37.222597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.110 [2024-12-10 05:37:37.222709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.110 [2024-12-10 05:37:37.222813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.110 [2024-12-10 05:37:37.222815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.110 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:50.110 05:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:50.111 [2024-12-10 05:37:37.360424] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.111 05:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:50.111 [2024-12-10 05:37:37.425614] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:50.111 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:53.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.763 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:05.763 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:05.763 05:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.763 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:05.763 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:05.763 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:05.763 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.763 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.763 rmmod nvme_tcp 00:13:06.021 rmmod nvme_fabrics 00:13:06.021 rmmod nvme_keyring 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1115104 ']' 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1115104 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1115104 ']' 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1115104 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1115104 
00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1115104' 00:13:06.021 killing process with pid 1115104 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1115104 00:13:06.021 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1115104 00:13:06.281 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:06.281 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:06.281 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:06.281 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:06.281 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:06.281 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:06.281 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:06.281 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.281 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:06.281 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.281 05:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.281 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.187 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:08.187 00:13:08.187 real 0m25.133s 00:13:08.187 user 1m8.057s 00:13:08.187 sys 0m5.806s 00:13:08.187 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.187 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:08.187 ************************************ 00:13:08.187 END TEST nvmf_connect_disconnect 00:13:08.187 ************************************ 00:13:08.187 05:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:08.187 05:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:08.187 05:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.187 05:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:08.446 ************************************ 00:13:08.446 START TEST nvmf_multitarget 00:13:08.446 ************************************ 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:08.446 * Looking for test storage... 
00:13:08.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:08.446 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.446 --rc genhtml_branch_coverage=1 00:13:08.446 --rc genhtml_function_coverage=1 00:13:08.446 --rc genhtml_legend=1 00:13:08.446 --rc geninfo_all_blocks=1 00:13:08.446 --rc geninfo_unexecuted_blocks=1 00:13:08.446 00:13:08.446 ' 00:13:08.446 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:08.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.446 --rc genhtml_branch_coverage=1 00:13:08.446 --rc genhtml_function_coverage=1 00:13:08.447 --rc genhtml_legend=1 00:13:08.447 --rc geninfo_all_blocks=1 00:13:08.447 --rc geninfo_unexecuted_blocks=1 00:13:08.447 00:13:08.447 ' 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:08.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.447 --rc genhtml_branch_coverage=1 00:13:08.447 --rc genhtml_function_coverage=1 00:13:08.447 --rc genhtml_legend=1 00:13:08.447 --rc geninfo_all_blocks=1 00:13:08.447 --rc geninfo_unexecuted_blocks=1 00:13:08.447 00:13:08.447 ' 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:08.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.447 --rc genhtml_branch_coverage=1 00:13:08.447 --rc genhtml_function_coverage=1 00:13:08.447 --rc genhtml_legend=1 00:13:08.447 --rc geninfo_all_blocks=1 00:13:08.447 --rc geninfo_unexecuted_blocks=1 00:13:08.447 00:13:08.447 ' 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.447 05:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:08.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.447 05:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:08.447 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.015 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:15.016 05:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.016 05:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:15.016 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:15.016 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.016 05:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:15.016 Found net devices under 0000:af:00.0: cvl_0_0 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.016 
05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:15.016 Found net devices under 0000:af:00.1: cvl_0_1 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.016 05:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.016 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:13:15.016 00:13:15.016 --- 10.0.0.2 ping statistics --- 00:13:15.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.016 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:15.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:13:15.016 00:13:15.016 --- 10.0.0.1 ping statistics --- 00:13:15.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.016 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:15.016 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1121361 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1121361 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1121361 ']' 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.017 [2024-12-10 05:38:02.266272] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:13:15.017 [2024-12-10 05:38:02.266318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.017 [2024-12-10 05:38:02.344650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.017 [2024-12-10 05:38:02.387052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.017 [2024-12-10 05:38:02.387089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:15.017 [2024-12-10 05:38:02.387097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.017 [2024-12-10 05:38:02.387103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.017 [2024-12-10 05:38:02.387109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.017 [2024-12-10 05:38:02.388579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.017 [2024-12-10 05:38:02.388689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.017 [2024-12-10 05:38:02.388811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.017 [2024-12-10 05:38:02.388812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:15.017 05:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:15.017 "nvmf_tgt_1" 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:15.017 "nvmf_tgt_2" 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:15.017 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:15.275 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:15.275 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:15.275 true 00:13:15.275 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:15.532 true 00:13:15.532 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:15.532 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.533 rmmod nvme_tcp 00:13:15.533 rmmod nvme_fabrics 00:13:15.533 rmmod nvme_keyring 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1121361 ']' 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1121361 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1121361 ']' 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1121361 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.533 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1121361 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1121361' 00:13:15.792 killing process with pid 1121361 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1121361 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1121361 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.792 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:18.329 00:13:18.329 real 0m9.581s 00:13:18.329 user 0m7.335s 00:13:18.329 sys 0m4.869s 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:18.329 ************************************ 00:13:18.329 END TEST nvmf_multitarget 00:13:18.329 ************************************ 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:18.329 ************************************ 00:13:18.329 START TEST nvmf_rpc 00:13:18.329 ************************************ 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:18.329 * Looking for test storage... 
00:13:18.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.329 05:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:18.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.329 --rc genhtml_branch_coverage=1 00:13:18.329 --rc genhtml_function_coverage=1 00:13:18.329 --rc genhtml_legend=1 00:13:18.329 --rc geninfo_all_blocks=1 00:13:18.329 --rc geninfo_unexecuted_blocks=1 
00:13:18.329 00:13:18.329 ' 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:18.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.329 --rc genhtml_branch_coverage=1 00:13:18.329 --rc genhtml_function_coverage=1 00:13:18.329 --rc genhtml_legend=1 00:13:18.329 --rc geninfo_all_blocks=1 00:13:18.329 --rc geninfo_unexecuted_blocks=1 00:13:18.329 00:13:18.329 ' 00:13:18.329 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:18.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.329 --rc genhtml_branch_coverage=1 00:13:18.329 --rc genhtml_function_coverage=1 00:13:18.329 --rc genhtml_legend=1 00:13:18.329 --rc geninfo_all_blocks=1 00:13:18.329 --rc geninfo_unexecuted_blocks=1 00:13:18.329 00:13:18.329 ' 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:18.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.330 --rc genhtml_branch_coverage=1 00:13:18.330 --rc genhtml_function_coverage=1 00:13:18.330 --rc genhtml_legend=1 00:13:18.330 --rc geninfo_all_blocks=1 00:13:18.330 --rc geninfo_unexecuted_blocks=1 00:13:18.330 00:13:18.330 ' 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.330 05:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:18.330 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:18.330 05:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.902 
05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:13:24.902 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:24.902 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:24.902 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:24.903 Found net devices under 0000:af:00.0: cvl_0_0 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:24.903 Found net devices under 0000:af:00.1: cvl_0_1 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.903 05:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:24.903 
05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:24.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:13:24.903 00:13:24.903 --- 10.0.0.2 ping statistics --- 00:13:24.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.903 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:13:24.903 00:13:24.903 --- 10.0.0.1 ping statistics --- 00:13:24.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.903 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1125086 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1125086 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1125086 ']' 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.903 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.903 [2024-12-10 05:38:12.021105] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:13:24.903 [2024-12-10 05:38:12.021147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.903 [2024-12-10 05:38:12.087058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.903 [2024-12-10 05:38:12.128272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.903 [2024-12-10 05:38:12.128307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:24.903 [2024-12-10 05:38:12.128314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.903 [2024-12-10 05:38:12.128320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.903 [2024-12-10 05:38:12.128325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.903 [2024-12-10 05:38:12.133187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.903 [2024-12-10 05:38:12.133228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.903 [2024-12-10 05:38:12.133337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.903 [2024-12-10 05:38:12.133338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.903 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.903 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:24.903 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:24.903 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:24.903 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.903 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.903 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:24.903 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.903 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.903 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.903 05:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:24.903 "tick_rate": 2100000000, 00:13:24.903 "poll_groups": [ 00:13:24.903 { 00:13:24.903 "name": "nvmf_tgt_poll_group_000", 00:13:24.903 "admin_qpairs": 0, 00:13:24.903 "io_qpairs": 0, 00:13:24.903 "current_admin_qpairs": 0, 00:13:24.903 "current_io_qpairs": 0, 00:13:24.903 "pending_bdev_io": 0, 00:13:24.903 "completed_nvme_io": 0, 00:13:24.903 "transports": [] 00:13:24.903 }, 00:13:24.903 { 00:13:24.903 "name": "nvmf_tgt_poll_group_001", 00:13:24.903 "admin_qpairs": 0, 00:13:24.903 "io_qpairs": 0, 00:13:24.903 "current_admin_qpairs": 0, 00:13:24.903 "current_io_qpairs": 0, 00:13:24.903 "pending_bdev_io": 0, 00:13:24.903 "completed_nvme_io": 0, 00:13:24.903 "transports": [] 00:13:24.903 }, 00:13:24.903 { 00:13:24.903 "name": "nvmf_tgt_poll_group_002", 00:13:24.903 "admin_qpairs": 0, 00:13:24.903 "io_qpairs": 0, 00:13:24.903 "current_admin_qpairs": 0, 00:13:24.903 "current_io_qpairs": 0, 00:13:24.903 "pending_bdev_io": 0, 00:13:24.903 "completed_nvme_io": 0, 00:13:24.903 "transports": [] 00:13:24.903 }, 00:13:24.903 { 00:13:24.903 "name": "nvmf_tgt_poll_group_003", 00:13:24.904 "admin_qpairs": 0, 00:13:24.904 "io_qpairs": 0, 00:13:24.904 "current_admin_qpairs": 0, 00:13:24.904 "current_io_qpairs": 0, 00:13:24.904 "pending_bdev_io": 0, 00:13:24.904 "completed_nvme_io": 0, 00:13:24.904 "transports": [] 00:13:24.904 } 00:13:24.904 ] 00:13:24.904 }' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:24.904 05:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.904 [2024-12-10 05:38:12.379332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:24.904 "tick_rate": 2100000000, 00:13:24.904 "poll_groups": [ 00:13:24.904 { 00:13:24.904 "name": "nvmf_tgt_poll_group_000", 00:13:24.904 "admin_qpairs": 0, 00:13:24.904 "io_qpairs": 0, 00:13:24.904 "current_admin_qpairs": 0, 00:13:24.904 "current_io_qpairs": 0, 00:13:24.904 "pending_bdev_io": 0, 00:13:24.904 "completed_nvme_io": 0, 00:13:24.904 "transports": [ 00:13:24.904 { 00:13:24.904 "trtype": "TCP" 00:13:24.904 } 00:13:24.904 ] 00:13:24.904 }, 00:13:24.904 { 00:13:24.904 "name": "nvmf_tgt_poll_group_001", 00:13:24.904 "admin_qpairs": 0, 00:13:24.904 "io_qpairs": 0, 00:13:24.904 "current_admin_qpairs": 0, 00:13:24.904 "current_io_qpairs": 0, 00:13:24.904 "pending_bdev_io": 0, 00:13:24.904 
"completed_nvme_io": 0, 00:13:24.904 "transports": [ 00:13:24.904 { 00:13:24.904 "trtype": "TCP" 00:13:24.904 } 00:13:24.904 ] 00:13:24.904 }, 00:13:24.904 { 00:13:24.904 "name": "nvmf_tgt_poll_group_002", 00:13:24.904 "admin_qpairs": 0, 00:13:24.904 "io_qpairs": 0, 00:13:24.904 "current_admin_qpairs": 0, 00:13:24.904 "current_io_qpairs": 0, 00:13:24.904 "pending_bdev_io": 0, 00:13:24.904 "completed_nvme_io": 0, 00:13:24.904 "transports": [ 00:13:24.904 { 00:13:24.904 "trtype": "TCP" 00:13:24.904 } 00:13:24.904 ] 00:13:24.904 }, 00:13:24.904 { 00:13:24.904 "name": "nvmf_tgt_poll_group_003", 00:13:24.904 "admin_qpairs": 0, 00:13:24.904 "io_qpairs": 0, 00:13:24.904 "current_admin_qpairs": 0, 00:13:24.904 "current_io_qpairs": 0, 00:13:24.904 "pending_bdev_io": 0, 00:13:24.904 "completed_nvme_io": 0, 00:13:24.904 "transports": [ 00:13:24.904 { 00:13:24.904 "trtype": "TCP" 00:13:24.904 } 00:13:24.904 ] 00:13:24.904 } 00:13:24.904 ] 00:13:24.904 }' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:24.904 
05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.904 Malloc1 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:24.904 05:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.904 [2024-12-10 05:38:12.559117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:24.904 [2024-12-10 05:38:12.587759] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:13:24.904 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:24.904 could not add new controller: failed to write to nvme-fabrics device 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.904 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.278 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.278 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:26.278 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.278 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:26.278 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:28.178 05:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.178 [2024-12-10 05:38:15.963776] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:13:28.178 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:28.178 could not add new controller: failed to write to nvme-fabrics device 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:28.178 
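The two denied `nvme connect` attempts traced above (the first before `nvmf_subsystem_add_host`, the second after `nvmf_subsystem_remove_host`) fail the same target-side check: a connecting host is admitted only if `allow_any_host` is enabled on the subsystem or its host NQN is on the subsystem's allowlist; otherwise the write to `/dev/nvme-fabrics` returns an I/O error. The following is a minimal shell model of that decision, an illustrative sketch only, not SPDK's actual `nvmf_qpair_access_allowed` implementation; the `host_allowed` helper is hypothetical:

```shell
# Illustrative model of the per-subsystem access check exercised by rpc.sh@58/@69.
# host_allowed is a hypothetical helper, not an SPDK function.
allow_any_host=0
allowed_hosts=""   # space-separated host NQNs added via nvmf_subsystem_add_host
hostnqn="nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562"

host_allowed() {
    [ "$allow_any_host" = 1 ] && return 0
    case " $allowed_hosts " in
        *" $1 "*) return 0 ;;   # host NQN is on the allowlist
    esac
    return 1                    # connect is rejected, as in the log's *ERROR* lines
}

# Before nvmf_subsystem_add_host: denied.
host_allowed "$hostnqn" && echo "connected" || echo "subsystem does not allow host"

# After nvmf_subsystem_add_host (or nvmf_subsystem_allow_any_host -e): admitted.
allowed_hosts="$hostnqn"
host_allowed "$hostnqn" && echo "connected" || echo "connected"="connected" && echo "connected"
```

The sketch prints the denial path first, then `connected` once the host NQN is listed, mirroring the fail/add-host/succeed sequence in the trace.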
05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.178 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.551 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.551 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:29.551 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.551 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:29.551 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:31.450 05:38:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.450 [2024-12-10 05:38:19.331791] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.450 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.708 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.708 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:31.708 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.708 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.708 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.708 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.641 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:32.641 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:32.641 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:32.641 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:32.641 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:35.169 
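Each of the five `seq 1 5` iterations traced above performs the same create/connect/teardown cycle. Below is a condensed sketch of one iteration; the `rpc` wrapper is a hypothetical stand-in that echoes the RPCs instead of invoking SPDK's `scripts/rpc.py`, so the sketch stands alone without a running target (subsystem NQN, namespace id, and listener address are taken from the log):

```shell
# One iteration of the rpc.sh@81 loop, with RPCs echoed instead of executed.
rpc() { echo "rpc.py $*"; }
NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME           # rpc.sh@82
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420  # rpc.sh@83
rpc nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5                      # rpc.sh@84: nsid 5
rpc nvmf_subsystem_allow_any_host "$NQN"                           # rpc.sh@85
echo "nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420"             # rpc.sh@86 (log adds host NQN flags)
echo "nvme disconnect -n $NQN"                                     # rpc.sh@90
rpc nvmf_subsystem_remove_ns "$NQN" 5                              # rpc.sh@93
rpc nvmf_delete_subsystem "$NQN"                                   # rpc.sh@94
```

The `waitforserial`/`waitforserial_disconnect` polling between connect and disconnect (the repeated `lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME` entries) is omitted here for brevity.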
05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.169 [2024-12-10 05:38:22.634966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.169 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.102 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.102 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:36.102 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.103 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:36.103 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:37.999 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:37.999 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:37.999 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.999 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:37.999 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.999 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:37.999 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.257 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:38.257 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:38.257 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:38.257 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.257 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:38.257 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.257 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:38.257 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.257 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.257 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.257 05:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.257 [2024-12-10 05:38:26.030472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.257 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:39.630 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:39.630 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:39.630 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.630 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:39.630 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.529 [2024-12-10 05:38:29.382335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.529 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.902 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.902 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:42.902 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:42.902 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:42.902 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.801 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.060 [2024-12-10 05:38:32.708594] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.060 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.436 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.436 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:46.436 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.436 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:46.436 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:48.357 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:48.357 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:48.357 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.357 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:48.357 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.357 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:48.357 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.357 [2024-12-10 05:38:36.086709] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.357 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 [2024-12-10 05:38:36.134794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 
05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:13:48.358 [2024-12-10 05:38:36.182945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 [2024-12-10 05:38:36.231100] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.358 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.628 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.628 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.628 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.628 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.628 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.628 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.628 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.628 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.628 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.629 [2024-12-10 05:38:36.279306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:48.629 "tick_rate": 2100000000, 00:13:48.629 "poll_groups": [ 00:13:48.629 { 00:13:48.629 "name": "nvmf_tgt_poll_group_000", 00:13:48.629 "admin_qpairs": 2, 00:13:48.629 "io_qpairs": 168, 00:13:48.629 "current_admin_qpairs": 0, 00:13:48.629 "current_io_qpairs": 0, 00:13:48.629 "pending_bdev_io": 0, 00:13:48.629 "completed_nvme_io": 294, 00:13:48.629 "transports": [ 00:13:48.629 { 00:13:48.629 "trtype": "TCP" 00:13:48.629 } 00:13:48.629 ] 00:13:48.629 }, 00:13:48.629 { 00:13:48.629 "name": "nvmf_tgt_poll_group_001", 00:13:48.629 "admin_qpairs": 2, 00:13:48.629 "io_qpairs": 168, 00:13:48.629 "current_admin_qpairs": 0, 00:13:48.629 "current_io_qpairs": 0, 00:13:48.629 "pending_bdev_io": 0, 00:13:48.629 "completed_nvme_io": 262, 00:13:48.629 "transports": [ 00:13:48.629 { 00:13:48.629 "trtype": "TCP" 00:13:48.629 } 00:13:48.629 ] 00:13:48.629 }, 00:13:48.629 { 00:13:48.629 "name": "nvmf_tgt_poll_group_002", 00:13:48.629 "admin_qpairs": 1, 00:13:48.629 "io_qpairs": 168, 00:13:48.629 "current_admin_qpairs": 0, 00:13:48.629 "current_io_qpairs": 0, 00:13:48.629 "pending_bdev_io": 0, 
00:13:48.629 "completed_nvme_io": 197, 00:13:48.629 "transports": [ 00:13:48.629 { 00:13:48.629 "trtype": "TCP" 00:13:48.629 } 00:13:48.629 ] 00:13:48.629 }, 00:13:48.629 { 00:13:48.629 "name": "nvmf_tgt_poll_group_003", 00:13:48.629 "admin_qpairs": 2, 00:13:48.629 "io_qpairs": 168, 00:13:48.629 "current_admin_qpairs": 0, 00:13:48.629 "current_io_qpairs": 0, 00:13:48.629 "pending_bdev_io": 0, 00:13:48.629 "completed_nvme_io": 269, 00:13:48.629 "transports": [ 00:13:48.629 { 00:13:48.629 "trtype": "TCP" 00:13:48.629 } 00:13:48.629 ] 00:13:48.629 } 00:13:48.629 ] 00:13:48.629 }' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:48.629 rmmod nvme_tcp 00:13:48.629 rmmod nvme_fabrics 00:13:48.629 rmmod nvme_keyring 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1125086 ']' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1125086 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1125086 ']' 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1125086 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:48.629 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1125086 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1125086' 00:13:48.921 killing process with pid 1125086 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1125086 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1125086 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.921 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.455 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:51.455 00:13:51.455 real 0m33.075s 00:13:51.455 user 1m39.742s 00:13:51.455 sys 0m6.508s 00:13:51.455 05:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.455 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.455 ************************************ 00:13:51.455 END TEST nvmf_rpc 00:13:51.455 ************************************ 00:13:51.455 05:38:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:51.455 05:38:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:51.455 05:38:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.455 05:38:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:51.455 ************************************ 00:13:51.455 START TEST nvmf_invalid 00:13:51.455 ************************************ 00:13:51.455 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:51.455 * Looking for test storage... 
00:13:51.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.455 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:51.455 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:51.455 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:51.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.455 --rc genhtml_branch_coverage=1 00:13:51.455 --rc 
genhtml_function_coverage=1 00:13:51.455 --rc genhtml_legend=1 00:13:51.455 --rc geninfo_all_blocks=1 00:13:51.455 --rc geninfo_unexecuted_blocks=1 00:13:51.455 00:13:51.455 ' 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:51.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.455 --rc genhtml_branch_coverage=1 00:13:51.455 --rc genhtml_function_coverage=1 00:13:51.455 --rc genhtml_legend=1 00:13:51.455 --rc geninfo_all_blocks=1 00:13:51.455 --rc geninfo_unexecuted_blocks=1 00:13:51.455 00:13:51.455 ' 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:51.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.455 --rc genhtml_branch_coverage=1 00:13:51.455 --rc genhtml_function_coverage=1 00:13:51.455 --rc genhtml_legend=1 00:13:51.455 --rc geninfo_all_blocks=1 00:13:51.455 --rc geninfo_unexecuted_blocks=1 00:13:51.455 00:13:51.455 ' 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:51.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.455 --rc genhtml_branch_coverage=1 00:13:51.455 --rc genhtml_function_coverage=1 00:13:51.455 --rc genhtml_legend=1 00:13:51.455 --rc geninfo_all_blocks=1 00:13:51.455 --rc geninfo_unexecuted_blocks=1 00:13:51.455 00:13:51.455 ' 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:51.455 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.456 05:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:51.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:51.456 05:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:51.456 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:58.029 05:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.029 05:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:58.029 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:58.029 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:58.029 Found net devices under 0000:af:00.0: cvl_0_0 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:58.029 Found net devices under 0000:af:00.1: cvl_0_1 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.029 05:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.029 05:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:58.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:13:58.029 00:13:58.029 --- 10.0.0.2 ping statistics --- 00:13:58.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.029 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:13:58.029 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:13:58.029 00:13:58.029 --- 10.0.0.1 ping statistics --- 00:13:58.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.029 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:13:58.030 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:58.030 05:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1132748 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1132748 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1132748 ']' 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:58.030 [2024-12-10 05:38:45.099801] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:13:58.030 [2024-12-10 05:38:45.099845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.030 [2024-12-10 05:38:45.179804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.030 [2024-12-10 05:38:45.221385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.030 [2024-12-10 05:38:45.221419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.030 [2024-12-10 05:38:45.221427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.030 [2024-12-10 05:38:45.221433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.030 [2024-12-10 05:38:45.221439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:58.030 [2024-12-10 05:38:45.222883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:58.030 [2024-12-10 05:38:45.222993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:58.030 [2024-12-10 05:38:45.223106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:58.030 [2024-12-10 05:38:45.223106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18275
00:13:58.030 [2024-12-10 05:38:45.545404] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:13:58.030 {
00:13:58.030 "nqn": "nqn.2016-06.io.spdk:cnode18275",
00:13:58.030 "tgt_name": "foobar",
00:13:58.030 "method": "nvmf_create_subsystem",
00:13:58.030 "req_id": 1
00:13:58.030 }
00:13:58.030 Got JSON-RPC error response
00:13:58.030 response:
00:13:58.030 {
00:13:58.030 "code": -32603,
00:13:58.030 "message": "Unable to find target foobar"
00:13:58.030 }'
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:13:58.030 {
00:13:58.030 "nqn": "nqn.2016-06.io.spdk:cnode18275",
00:13:58.030 "tgt_name": "foobar",
00:13:58.030 "method": "nvmf_create_subsystem",
00:13:58.030 "req_id": 1
00:13:58.030 }
00:13:58.030 Got JSON-RPC error response
00:13:58.030 response:
00:13:58.030 {
00:13:58.030 "code": -32603,
00:13:58.030 "message": "Unable to find target foobar"
00:13:58.030 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3924
00:13:58.030 [2024-12-10 05:38:45.742064] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3924: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:13:58.030 {
00:13:58.030 "nqn": "nqn.2016-06.io.spdk:cnode3924",
00:13:58.030 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:58.030 "method": "nvmf_create_subsystem",
00:13:58.030 "req_id": 1
00:13:58.030 }
00:13:58.030 Got JSON-RPC error response
00:13:58.030 response:
00:13:58.030 {
00:13:58.030 "code": -32602,
00:13:58.030 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:58.030 }'
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:13:58.030 {
00:13:58.030 "nqn": "nqn.2016-06.io.spdk:cnode3924",
00:13:58.030 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:58.030 "method": "nvmf_create_subsystem",
00:13:58.030 "req_id": 1
00:13:58.030 }
00:13:58.030 Got JSON-RPC error response
00:13:58.030 response:
00:13:58.030 {
00:13:58.030 "code": -32602,
00:13:58.030 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:58.030 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:13:58.030 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19982
00:13:58.289 [2024-12-10 05:38:45.942706] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19982: invalid model number 'SPDK_Controller'
00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:13:58.289 {
00:13:58.289 "nqn": "nqn.2016-06.io.spdk:cnode19982",
00:13:58.289 "model_number": "SPDK_Controller\u001f",
00:13:58.289 "method": "nvmf_create_subsystem",
00:13:58.289 "req_id": 1
00:13:58.289 }
00:13:58.289 Got JSON-RPC error response
00:13:58.289 response:
00:13:58.289 {
00:13:58.289 "code": -32602,
00:13:58.289 "message": "Invalid MN SPDK_Controller\u001f"
00:13:58.289 }'
00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:13:58.289 {
00:13:58.289 "nqn": "nqn.2016-06.io.spdk:cnode19982",
00:13:58.289 "model_number": "SPDK_Controller\u001f",
00:13:58.289 "method": "nvmf_create_subsystem",
00:13:58.289 "req_id": 1
00:13:58.289 }
00:13:58.289 Got JSON-RPC error response
00:13:58.289 response:
00:13:58.289 {
00:13:58.289 "code": -32602,
00:13:58.289 "message": "Invalid MN SPDK_Controller\u001f"
00:13:58.289 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:58.289 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:58.289 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:58.289 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:58.289 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.290 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ @ == \- ]]
00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '@i!L[XK@6[}`Pqig_#Gk'
00:13:58.290 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '@i!L[XK@6[}`Pqig_#Gk' nqn.2016-06.io.spdk:cnode27590
00:13:58.548 [2024-12-10 05:38:46.267806] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27590: invalid serial number '@i!L[XK@6[}`Pqig_#Gk'
00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:13:58.548 {
00:13:58.548 "nqn": "nqn.2016-06.io.spdk:cnode27590",
00:13:58.548 "serial_number": "@i!L[XK@6\u007f[}`Pqig_#Gk",
00:13:58.548 "method": "nvmf_create_subsystem",
00:13:58.548 "req_id": 1
00:13:58.548 }
00:13:58.548 Got JSON-RPC error response
00:13:58.548 response:
00:13:58.548 {
00:13:58.548 "code": -32602,
00:13:58.548 "message": "Invalid SN @i!L[XK@6\u007f[}`Pqig_#Gk"
00:13:58.548 }'
00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:13:58.548 {
00:13:58.548 "nqn": "nqn.2016-06.io.spdk:cnode27590",
00:13:58.548 "serial_number": "@i!L[XK@6\u007f[}`Pqig_#Gk",
00:13:58.548 "method": "nvmf_create_subsystem",
00:13:58.548 "req_id": 1
00:13:58.548 }
00:13:58.548 Got JSON-RPC error response
00:13:58.548 response:
00:13:58.548 {
00:13:58.548 "code": -32602,
00:13:58.548 "message": "Invalid SN @i!L[XK@6\u007f[}`Pqig_#Gk"
00:13:58.548 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
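The long printf %x / echo -e run in the trace above is invalid.sh's gen_random_s assembling a 21-character serial number one byte at a time from the codes 32-127 in its chars array (printable ASCII plus DEL, which is why the resulting string carries a \u007f). A rough Python equivalent; how the shell picks each index is not shown in this excerpt, so the random selection below is an assumption:

```python
import random

def gen_random_s(length, seed=None):
    # The chars array in the trace holds decimal codes 32..127; each loop pass
    # appends one of them (the shell renders a code via printf %x + echo -e).
    # Index selection is assumed random here, since the excerpt only shows
    # the codes that were chosen.
    rng = random.Random(seed)
    return "".join(chr(rng.randrange(32, 128)) for _ in range(length))

s = gen_random_s(21)
print(len(s), all(32 <= ord(c) <= 127 for c in s))  # prints: 21 True
```

Any character outside the controller-spec range (such as DEL or other control bytes) makes the target reject the serial number with "Invalid SN", which is exactly what the nvmf_rpc.c error above exercises.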
00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.548 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.548 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:58.549 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:58.549 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:58.549 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:58.549 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.549 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.807 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:58.807 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:58.807 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:58.807 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:58.808 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 
00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:58.808 
05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ j == \- ]] 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'j>G$KTB' 00:13:58.808 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'j>G$KTB' nqn.2016-06.io.spdk:cnode467 00:13:59.066 [2024-12-10 
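The repeated `printf %x` / `echo -e` / `string+=` steps traced above amount to a hex-escape string builder: each ASCII code is converted to hex and appended as a character. A minimal standalone sketch of that pattern (the variable names and example codes are illustrative, not the invalid.sh source itself):

```shell
#!/usr/bin/env bash
# Build a string by appending characters from their ASCII codes,
# mirroring the printf %x / echo -e pattern in the trace above.
string=""
for code in 98 63 93 45; do           # example codes: b ? ] -
    hex=$(printf '%x' "$code")        # decimal -> hex (98 -> 62)
    ch=$(echo -e "\\x$hex")           # hex escape -> character
    string+="$ch"
done
echo "$string"                        # prints: b?]-
```

The trace runs this loop once per character of a randomly generated model number, which is then passed to `nvmf_create_subsystem` to provoke the validation error.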
05:38:46.749385] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode467: invalid model number 'j>G$KTB' 00:13:59.066 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:59.066 { 00:13:59.066 "nqn": "nqn.2016-06.io.spdk:cnode467", 00:13:59.066 "model_number": "j>G$KTB", 00:13:59.066 "method": "nvmf_create_subsystem", 00:13:59.066 "req_id": 1 00:13:59.066 } 00:13:59.066 Got JSON-RPC error response 00:13:59.066 response: 00:13:59.066 { 00:13:59.066 "code": -32602, 00:13:59.066 "message": "Invalid MN j>G$KTB" 00:13:59.066 }' 00:13:59.066 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:59.066 { 00:13:59.066 "nqn": "nqn.2016-06.io.spdk:cnode467", 00:13:59.066 "model_number": "j>G$KTB", 00:13:59.066 "method": "nvmf_create_subsystem", 00:13:59.066 "req_id": 1 00:13:59.066 } 00:13:59.066 Got JSON-RPC error response 00:13:59.066 response: 00:13:59.066 { 00:13:59.066 "code": -32602, 00:13:59.066 "message": "Invalid MN j>G$KTB" 00:13:59.066 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:59.066 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:59.066 [2024-12-10 05:38:46.950083] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.324 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:59.324 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:59.324 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:59.324 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:59.324 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
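The check at invalid.sh@59 above follows a pattern used throughout this test: capture the JSON-RPC error text emitted by rpc.py, then glob-match the expected message substring. A reduced sketch (the captured payload here is a stand-in, not live rpc.py output):

```shell
#!/usr/bin/env bash
# Glob-match an expected error message inside a captured JSON-RPC
# error response, as the [[ $out == *\I\n\v\a\l\i\d... ]] checks do.
out='{"code": -32602, "message": "Invalid MN j>G$KTB"}'
if [[ $out == *"Invalid MN"* ]]; then
    echo "error matched"              # prints: error matched
fi
```

Code -32602 is the JSON-RPC "invalid params" error, which is why every malformed-argument case in this trace reports the same code with a different message.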
target/invalid.sh@67 -- # IP= 00:13:59.324 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:59.581 [2024-12-10 05:38:47.351417] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:59.581 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:59.581 { 00:13:59.581 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:59.581 "listen_address": { 00:13:59.581 "trtype": "tcp", 00:13:59.581 "traddr": "", 00:13:59.581 "trsvcid": "4421" 00:13:59.581 }, 00:13:59.581 "method": "nvmf_subsystem_remove_listener", 00:13:59.581 "req_id": 1 00:13:59.581 } 00:13:59.581 Got JSON-RPC error response 00:13:59.581 response: 00:13:59.581 { 00:13:59.581 "code": -32602, 00:13:59.581 "message": "Invalid parameters" 00:13:59.581 }' 00:13:59.581 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:59.581 { 00:13:59.581 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:59.581 "listen_address": { 00:13:59.581 "trtype": "tcp", 00:13:59.581 "traddr": "", 00:13:59.581 "trsvcid": "4421" 00:13:59.581 }, 00:13:59.581 "method": "nvmf_subsystem_remove_listener", 00:13:59.581 "req_id": 1 00:13:59.581 } 00:13:59.581 Got JSON-RPC error response 00:13:59.581 response: 00:13:59.581 { 00:13:59.581 "code": -32602, 00:13:59.581 "message": "Invalid parameters" 00:13:59.581 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:59.581 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26129 -i 0 00:13:59.839 [2024-12-10 05:38:47.552015] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26129: invalid cntlid range [0-65519] 00:13:59.839 05:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:59.839 { 00:13:59.839 "nqn": "nqn.2016-06.io.spdk:cnode26129", 00:13:59.839 "min_cntlid": 0, 00:13:59.839 "method": "nvmf_create_subsystem", 00:13:59.839 "req_id": 1 00:13:59.839 } 00:13:59.839 Got JSON-RPC error response 00:13:59.839 response: 00:13:59.839 { 00:13:59.839 "code": -32602, 00:13:59.839 "message": "Invalid cntlid range [0-65519]" 00:13:59.839 }' 00:13:59.839 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:59.839 { 00:13:59.839 "nqn": "nqn.2016-06.io.spdk:cnode26129", 00:13:59.839 "min_cntlid": 0, 00:13:59.839 "method": "nvmf_create_subsystem", 00:13:59.839 "req_id": 1 00:13:59.839 } 00:13:59.839 Got JSON-RPC error response 00:13:59.839 response: 00:13:59.839 { 00:13:59.839 "code": -32602, 00:13:59.839 "message": "Invalid cntlid range [0-65519]" 00:13:59.839 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:59.839 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8839 -i 65520 00:14:00.096 [2024-12-10 05:38:47.756697] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8839: invalid cntlid range [65520-65519] 00:14:00.096 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:00.096 { 00:14:00.096 "nqn": "nqn.2016-06.io.spdk:cnode8839", 00:14:00.096 "min_cntlid": 65520, 00:14:00.096 "method": "nvmf_create_subsystem", 00:14:00.096 "req_id": 1 00:14:00.096 } 00:14:00.096 Got JSON-RPC error response 00:14:00.096 response: 00:14:00.096 { 00:14:00.096 "code": -32602, 00:14:00.096 "message": "Invalid cntlid range [65520-65519]" 00:14:00.096 }' 00:14:00.096 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:00.096 { 00:14:00.096 "nqn": 
"nqn.2016-06.io.spdk:cnode8839", 00:14:00.096 "min_cntlid": 65520, 00:14:00.096 "method": "nvmf_create_subsystem", 00:14:00.096 "req_id": 1 00:14:00.096 } 00:14:00.096 Got JSON-RPC error response 00:14:00.096 response: 00:14:00.096 { 00:14:00.096 "code": -32602, 00:14:00.096 "message": "Invalid cntlid range [65520-65519]" 00:14:00.096 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:00.096 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4220 -I 0 00:14:00.096 [2024-12-10 05:38:47.969478] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4220: invalid cntlid range [1-0] 00:14:00.354 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:00.354 { 00:14:00.354 "nqn": "nqn.2016-06.io.spdk:cnode4220", 00:14:00.354 "max_cntlid": 0, 00:14:00.354 "method": "nvmf_create_subsystem", 00:14:00.354 "req_id": 1 00:14:00.354 } 00:14:00.354 Got JSON-RPC error response 00:14:00.354 response: 00:14:00.354 { 00:14:00.354 "code": -32602, 00:14:00.354 "message": "Invalid cntlid range [1-0]" 00:14:00.354 }' 00:14:00.354 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:00.354 { 00:14:00.354 "nqn": "nqn.2016-06.io.spdk:cnode4220", 00:14:00.354 "max_cntlid": 0, 00:14:00.354 "method": "nvmf_create_subsystem", 00:14:00.354 "req_id": 1 00:14:00.354 } 00:14:00.354 Got JSON-RPC error response 00:14:00.354 response: 00:14:00.354 { 00:14:00.354 "code": -32602, 00:14:00.354 "message": "Invalid cntlid range [1-0]" 00:14:00.354 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:00.354 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28213 -I 65520 00:14:00.354 [2024-12-10 
05:38:48.186218] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28213: invalid cntlid range [1-65520] 00:14:00.354 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:00.354 { 00:14:00.354 "nqn": "nqn.2016-06.io.spdk:cnode28213", 00:14:00.354 "max_cntlid": 65520, 00:14:00.354 "method": "nvmf_create_subsystem", 00:14:00.354 "req_id": 1 00:14:00.354 } 00:14:00.354 Got JSON-RPC error response 00:14:00.354 response: 00:14:00.354 { 00:14:00.354 "code": -32602, 00:14:00.354 "message": "Invalid cntlid range [1-65520]" 00:14:00.354 }' 00:14:00.354 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:00.354 { 00:14:00.354 "nqn": "nqn.2016-06.io.spdk:cnode28213", 00:14:00.354 "max_cntlid": 65520, 00:14:00.354 "method": "nvmf_create_subsystem", 00:14:00.354 "req_id": 1 00:14:00.354 } 00:14:00.354 Got JSON-RPC error response 00:14:00.354 response: 00:14:00.354 { 00:14:00.354 "code": -32602, 00:14:00.354 "message": "Invalid cntlid range [1-65520]" 00:14:00.354 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:00.354 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17007 -i 6 -I 5 00:14:00.612 [2024-12-10 05:38:48.382909] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17007: invalid cntlid range [6-5] 00:14:00.612 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:00.612 { 00:14:00.612 "nqn": "nqn.2016-06.io.spdk:cnode17007", 00:14:00.612 "min_cntlid": 6, 00:14:00.612 "max_cntlid": 5, 00:14:00.612 "method": "nvmf_create_subsystem", 00:14:00.612 "req_id": 1 00:14:00.612 } 00:14:00.612 Got JSON-RPC error response 00:14:00.612 response: 00:14:00.612 { 00:14:00.612 "code": -32602, 00:14:00.612 "message": "Invalid cntlid range 
[6-5]" 00:14:00.612 }' 00:14:00.612 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:00.612 { 00:14:00.612 "nqn": "nqn.2016-06.io.spdk:cnode17007", 00:14:00.612 "min_cntlid": 6, 00:14:00.612 "max_cntlid": 5, 00:14:00.612 "method": "nvmf_create_subsystem", 00:14:00.612 "req_id": 1 00:14:00.612 } 00:14:00.612 Got JSON-RPC error response 00:14:00.612 response: 00:14:00.612 { 00:14:00.612 "code": -32602, 00:14:00.612 "message": "Invalid cntlid range [6-5]" 00:14:00.612 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:00.612 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:00.870 { 00:14:00.870 "name": "foobar", 00:14:00.870 "method": "nvmf_delete_target", 00:14:00.870 "req_id": 1 00:14:00.870 } 00:14:00.870 Got JSON-RPC error response 00:14:00.870 response: 00:14:00.870 { 00:14:00.870 "code": -32602, 00:14:00.870 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:00.870 }' 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:00.870 { 00:14:00.870 "name": "foobar", 00:14:00.870 "method": "nvmf_delete_target", 00:14:00.870 "req_id": 1 00:14:00.870 } 00:14:00.870 Got JSON-RPC error response 00:14:00.870 response: 00:14:00.870 { 00:14:00.870 "code": -32602, 00:14:00.870 "message": "The specified target doesn't exist, cannot delete it." 
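The cntlid cases exercised above (min 0, min 65520, max 0, max 65520, min 6 with max 5) all probe the same constraint: controller IDs must lie in 1..65519 and the minimum must not exceed the maximum. A hedged sketch of that validation (the helper name is illustrative, not SPDK's internal check):

```shell
#!/usr/bin/env bash
# Mirror the cntlid range rule the tests above provoke:
# valid IDs are 1..65519 and min must not exceed max.
valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && max <= 65519 && min <= max ))
}
valid_cntlid_range 1 65519 && echo "accepted"   # in range
valid_cntlid_range 0 65519 || echo "rejected"   # min 0, as cnode26129
valid_cntlid_range 6 5     || echo "rejected"   # inverted, as cnode17007
```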
00:14:00.870 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:00.870 rmmod nvme_tcp 00:14:00.870 rmmod nvme_fabrics 00:14:00.870 rmmod nvme_keyring 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1132748 ']' 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1132748 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1132748 ']' 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1132748 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1132748 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1132748' 00:14:00.870 killing process with pid 1132748 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1132748 00:14:00.870 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1132748 00:14:01.130 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:01.130 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:01.130 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:01.130 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:01.130 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:01.130 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:01.130 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:01.130 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:01.130 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:01.130 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.130 05:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.130 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.035 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:03.035 00:14:03.035 real 0m11.976s 00:14:03.035 user 0m18.534s 00:14:03.035 sys 0m5.343s 00:14:03.035 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.035 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:03.035 ************************************ 00:14:03.035 END TEST nvmf_invalid 00:14:03.035 ************************************ 00:14:03.035 05:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:03.035 05:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:03.035 05:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.035 05:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:03.294 ************************************ 00:14:03.294 START TEST nvmf_connect_stress 00:14:03.294 ************************************ 00:14:03.294 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:03.294 * Looking for test storage... 
00:14:03.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:03.294 05:38:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:03.294 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:03.295 05:38:51 
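The `cmp_versions 1.15 '<' 2` walk-through traced above splits each version on `.-:` and compares fields numerically, padding the shorter version with zeros. A condensed sketch of that field-by-field comparison (the function name is illustrative, not the scripts/common.sh implementation):

```shell
#!/usr/bin/env bash
# Numeric field-by-field version comparison, as cmp_versions does:
# split on dots, compare each field, missing fields count as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0      # strictly older
        (( x > y )) && return 1      # strictly newer
    done
    return 1                          # equal is not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"  # matches the trace's result
```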
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:03.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.295 --rc genhtml_branch_coverage=1 00:14:03.295 --rc genhtml_function_coverage=1 00:14:03.295 --rc genhtml_legend=1 00:14:03.295 --rc geninfo_all_blocks=1 00:14:03.295 --rc geninfo_unexecuted_blocks=1 00:14:03.295 00:14:03.295 ' 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:03.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.295 --rc genhtml_branch_coverage=1 00:14:03.295 --rc genhtml_function_coverage=1 00:14:03.295 --rc genhtml_legend=1 00:14:03.295 --rc geninfo_all_blocks=1 00:14:03.295 --rc geninfo_unexecuted_blocks=1 00:14:03.295 00:14:03.295 ' 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:03.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.295 --rc genhtml_branch_coverage=1 00:14:03.295 --rc genhtml_function_coverage=1 00:14:03.295 --rc genhtml_legend=1 00:14:03.295 --rc geninfo_all_blocks=1 00:14:03.295 --rc geninfo_unexecuted_blocks=1 00:14:03.295 00:14:03.295 ' 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:03.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.295 --rc genhtml_branch_coverage=1 00:14:03.295 --rc genhtml_function_coverage=1 00:14:03.295 --rc genhtml_legend=1 00:14:03.295 --rc geninfo_all_blocks=1 00:14:03.295 --rc geninfo_unexecuted_blocks=1 00:14:03.295 00:14:03.295 ' 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:03.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.295 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:03.296 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:03.296 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:03.296 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.863 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.863 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:09.863 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:09.863 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:09.863 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:09.863 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.864 05:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:09.864 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.864 05:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:09.864 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.864 05:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:09.864 Found net devices under 0000:af:00.0: cvl_0_0 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:09.864 Found net devices under 0000:af:00.1: cvl_0_1 
00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.864 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.864 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.864 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:09.864 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:09.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:09.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:14:09.864 00:14:09.865 --- 10.0.0.2 ping statistics --- 00:14:09.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.865 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:14:09.865 00:14:09.865 --- 10.0.0.1 ping statistics --- 00:14:09.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.865 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:09.865 05:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1137046 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1137046 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1137046 ']' 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.865 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.865 [2024-12-10 05:38:57.127743] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:14:09.865 [2024-12-10 05:38:57.127794] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.865 [2024-12-10 05:38:57.208530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.865 [2024-12-10 05:38:57.248676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.865 [2024-12-10 05:38:57.248711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.865 [2024-12-10 05:38:57.248719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.865 [2024-12-10 05:38:57.248725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.865 [2024-12-10 05:38:57.248730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:09.865 [2024-12-10 05:38:57.249902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.865 [2024-12-10 05:38:57.249921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.865 [2024-12-10 05:38:57.253181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.123 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.123 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:10.123 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.123 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.123 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.382 [2024-12-10 05:38:58.038428] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.382 [2024-12-10 05:38:58.058599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.382 NULL1 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1137288 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.382 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.383 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.641 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.641 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:10.641 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.641 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.641 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.206 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.206 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:11.206 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.206 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.206 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.463 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.463 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:11.463 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.463 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.463 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.721 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.721 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:11.721 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.721 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.721 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.979 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.979 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:11.979 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.979 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.979 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.237 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.237 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:12.237 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.237 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.238 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.803 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.803 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:12.803 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.803 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.803 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.060 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.060 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:13.060 05:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.060 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.060 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.317 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.317 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:13.317 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.317 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.317 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.575 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.575 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:13.575 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.575 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.575 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.141 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.141 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:14.141 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.141 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.141 
05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.400 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.400 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:14.400 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.400 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.400 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.658 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.658 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:14.658 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.658 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.658 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.916 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.916 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:14.916 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.916 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.916 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.174 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.174 
05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:15.174 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.174 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.174 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.740 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.740 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:15.740 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.740 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.740 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.998 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.998 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:15.998 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.998 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.998 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.259 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.259 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:16.259 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:14:16.259 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.259 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.517 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.517 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:16.517 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.517 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.517 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.775 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.775 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:16.775 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.775 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.775 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.342 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.342 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:17.342 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.342 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.342 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:14:17.600 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.600 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:17.600 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.600 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.600 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.858 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.858 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:17.858 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.858 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.858 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.116 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.116 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:18.116 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.116 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.116 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.682 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.682 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1137288 00:14:18.682 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.682 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.682 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.940 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.940 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:18.940 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.940 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.940 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.198 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.198 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:19.198 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.198 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.198 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.456 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.456 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:19.456 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.456 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:19.456 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.715 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.715 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:19.715 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.715 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.715 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.282 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.282 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:20.282 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.282 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.282 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.541 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.541 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:20.541 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.541 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.541 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.541 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
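The trace entries above repeat one supervision pattern: while the connect_stress worker (PID 1137288 in this run) is still alive, the script keeps issuing `rpc_cmd` calls against the target, using `kill -0 $pid` purely as a liveness probe. A minimal sketch of that pattern, with a placeholder `sleep` standing in for the worker and the PID and counter names being illustrative only:

```shell
#!/usr/bin/env bash
# Sketch of the liveness-polling loop visible in the trace above.
# `kill -0` sends no signal at all; it only reports (via exit status)
# whether the PID exists and we are allowed to signal it.

sleep 2 &          # stand-in for the stress worker process
worker_pid=$!

polls=0
while kill -0 "$worker_pid" 2>/dev/null; do
    # In the real test, this is where rpc_cmd pokes the NVMe-oF target.
    polls=$((polls + 1))
    sleep 0.5
done

echo "worker exited after $polls polls"
```

Once the worker exits, `kill -0` starts failing ("No such process", as seen further down in the log) and the loop ends.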
00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1137288 00:14:20.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1137288) - No such process 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1137288 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:20.799 rmmod nvme_tcp 00:14:20.799 rmmod nvme_fabrics 00:14:20.799 rmmod nvme_keyring 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1137046 ']' 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1137046 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1137046 ']' 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1137046 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.799 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1137046 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1137046' 00:14:21.059 killing process with pid 1137046 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1137046 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1137046 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.059 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.598 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:23.598 00:14:23.598 real 0m19.996s 00:14:23.598 user 0m42.618s 00:14:23.598 sys 0m8.509s 00:14:23.598 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.598 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.598 ************************************ 00:14:23.598 END TEST nvmf_connect_stress 00:14:23.598 ************************************ 00:14:23.598 05:39:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:23.598 05:39:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:23.598 05:39:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.598 05:39:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:23.598 ************************************ 00:14:23.598 START TEST nvmf_fused_ordering 00:14:23.598 ************************************ 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:23.598 * Looking for test storage... 00:14:23.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:23.598 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.599 05:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:23.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.599 --rc genhtml_branch_coverage=1 00:14:23.599 --rc genhtml_function_coverage=1 00:14:23.599 --rc genhtml_legend=1 00:14:23.599 --rc geninfo_all_blocks=1 00:14:23.599 --rc geninfo_unexecuted_blocks=1 00:14:23.599 00:14:23.599 ' 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:23.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.599 --rc genhtml_branch_coverage=1 00:14:23.599 --rc genhtml_function_coverage=1 00:14:23.599 --rc genhtml_legend=1 00:14:23.599 --rc geninfo_all_blocks=1 00:14:23.599 --rc geninfo_unexecuted_blocks=1 00:14:23.599 00:14:23.599 ' 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:23.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.599 --rc genhtml_branch_coverage=1 00:14:23.599 --rc genhtml_function_coverage=1 00:14:23.599 --rc genhtml_legend=1 00:14:23.599 --rc geninfo_all_blocks=1 00:14:23.599 --rc geninfo_unexecuted_blocks=1 00:14:23.599 00:14:23.599 ' 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:23.599 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:23.599 --rc genhtml_branch_coverage=1 00:14:23.599 --rc genhtml_function_coverage=1 00:14:23.599 --rc genhtml_legend=1 00:14:23.599 --rc geninfo_all_blocks=1 00:14:23.599 --rc geninfo_unexecuted_blocks=1 00:14:23.599 00:14:23.599 ' 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.599 05:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:23.599 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.196 05:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:30.196 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.196 05:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:30.196 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.196 05:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:30.196 Found net devices under 0000:af:00.0: cvl_0_0 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:30.196 Found net devices under 0000:af:00.1: cvl_0_1 
00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:30.196 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:30.197 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:30.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:30.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:14:30.197 00:14:30.197 --- 10.0.0.2 ping statistics --- 00:14:30.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.197 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:14:30.197 00:14:30.197 --- 10.0.0.1 ping statistics --- 00:14:30.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.197 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:30.197 05:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1143026 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1143026 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1143026 ']' 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.197 [2024-12-10 05:39:17.178576] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:14:30.197 [2024-12-10 05:39:17.178629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.197 [2024-12-10 05:39:17.255727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.197 [2024-12-10 05:39:17.295567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.197 [2024-12-10 05:39:17.295603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.197 [2024-12-10 05:39:17.295610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.197 [2024-12-10 05:39:17.295615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.197 [2024-12-10 05:39:17.295621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:30.197 [2024-12-10 05:39:17.296103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.197 [2024-12-10 05:39:17.432012] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.197 [2024-12-10 05:39:17.452191] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.197 NULL1 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.197 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:30.197 [2024-12-10 05:39:17.511289] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:14:30.197 [2024-12-10 05:39:17.511332] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143092 ] 00:14:30.197 Attached to nqn.2016-06.io.spdk:cnode1 00:14:30.197 Namespace ID: 1 size: 1GB 00:14:30.197 fused_ordering(0) 00:14:30.197 fused_ordering(1) 00:14:30.197 fused_ordering(2) 00:14:30.198 fused_ordering(3) 00:14:30.198 fused_ordering(4) 00:14:30.198 fused_ordering(5) 00:14:30.198 fused_ordering(6) 00:14:30.198 fused_ordering(7) 00:14:30.198 fused_ordering(8) 00:14:30.198 fused_ordering(9) 00:14:30.198 fused_ordering(10) 00:14:30.198 fused_ordering(11) 00:14:30.198 fused_ordering(12) 00:14:30.198 fused_ordering(13) 00:14:30.198 fused_ordering(14) 00:14:30.198 fused_ordering(15) 00:14:30.198 fused_ordering(16) 00:14:30.198 fused_ordering(17) 00:14:30.198 fused_ordering(18) 00:14:30.198 fused_ordering(19) 00:14:30.198 fused_ordering(20) 00:14:30.198 fused_ordering(21) 00:14:30.198 fused_ordering(22) 00:14:30.198 fused_ordering(23) 00:14:30.198 fused_ordering(24) 00:14:30.198 fused_ordering(25) 00:14:30.198 fused_ordering(26) 00:14:30.198 fused_ordering(27) 00:14:30.198 
[fused_ordering(28) … fused_ordering(1023): 996 consecutive fused_ordering iteration counters, logged between 00:14:30.198 and 00:14:31.542, collapsed] 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:31.542 rmmod nvme_tcp 00:14:31.542 rmmod nvme_fabrics 00:14:31.542 rmmod nvme_keyring 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1143026 ']' 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1143026 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1143026 ']' 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1143026 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1143026 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1143026' 00:14:31.542 killing process with pid 1143026 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1143026 00:14:31.542 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1143026 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.801 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:34.334 00:14:34.334 real 0m10.606s 00:14:34.334 user 0m4.984s 00:14:34.334 sys 0m5.758s 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:34.334 ************************************ 00:14:34.334 END TEST nvmf_fused_ordering 00:14:34.334 ************************************ 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:34.334 05:39:21 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:34.334 ************************************ 00:14:34.334 START TEST nvmf_ns_masking 00:14:34.334 ************************************ 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:34.334 * Looking for test storage... 00:14:34.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.334 05:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:34.334 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:34.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.335 --rc genhtml_branch_coverage=1 00:14:34.335 --rc genhtml_function_coverage=1 00:14:34.335 --rc genhtml_legend=1 00:14:34.335 --rc geninfo_all_blocks=1 00:14:34.335 --rc geninfo_unexecuted_blocks=1 00:14:34.335 00:14:34.335 ' 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:34.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.335 --rc genhtml_branch_coverage=1 00:14:34.335 --rc genhtml_function_coverage=1 00:14:34.335 --rc genhtml_legend=1 00:14:34.335 --rc geninfo_all_blocks=1 00:14:34.335 --rc geninfo_unexecuted_blocks=1 00:14:34.335 00:14:34.335 ' 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:34.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.335 --rc genhtml_branch_coverage=1 00:14:34.335 --rc genhtml_function_coverage=1 00:14:34.335 --rc genhtml_legend=1 00:14:34.335 --rc geninfo_all_blocks=1 00:14:34.335 --rc geninfo_unexecuted_blocks=1 00:14:34.335 00:14:34.335 ' 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:34.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.335 --rc genhtml_branch_coverage=1 00:14:34.335 --rc 
genhtml_function_coverage=1 00:14:34.335 --rc genhtml_legend=1 00:14:34.335 --rc geninfo_all_blocks=1 00:14:34.335 --rc geninfo_unexecuted_blocks=1 00:14:34.335 00:14:34.335 ' 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=62f50665-7e9c-443c-a1ce-a9a74247e8d4 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=30334e02-0a90-4461-97ea-f9b60231d058 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=04100830-d747-4116-86f5-5a38200d21c5 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:34.335 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:40.902 05:39:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.902 05:39:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:40.902 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:40.903 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:40.903 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:14:40.903 Found net devices under 0000:af:00.0: cvl_0_0 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:40.903 Found net devices under 0000:af:00.1: cvl_0_1 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:40.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:14:40.903 00:14:40.903 --- 10.0.0.2 ping statistics --- 00:14:40.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.903 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:14:40.903 00:14:40.903 --- 10.0.0.1 ping statistics --- 00:14:40.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.903 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1146799 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1146799 
00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1146799 ']' 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.903 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.903 [2024-12-10 05:39:27.995586] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:14:40.903 [2024-12-10 05:39:27.995636] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.903 [2024-12-10 05:39:28.074685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.903 [2024-12-10 05:39:28.113845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.903 [2024-12-10 05:39:28.113879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:40.903 [2024-12-10 05:39:28.113886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.903 [2024-12-10 05:39:28.113892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.903 [2024-12-10 05:39:28.113897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.903 [2024-12-10 05:39:28.114395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.903 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.903 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:40.904 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:40.904 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:40.904 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.904 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.904 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:40.904 [2024-12-10 05:39:28.410592] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.904 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:40.904 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:40.904 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:40.904 Malloc1 00:14:40.904 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:41.162 Malloc2 00:14:41.162 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:41.162 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:41.420 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.677 [2024-12-10 05:39:29.394400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.678 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:41.678 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 04100830-d747-4116-86f5-5a38200d21c5 -a 10.0.0.2 -s 4420 -i 4 00:14:41.678 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:41.678 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:41.678 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.678 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:41.678 05:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:44.204 [ 0]:0x1 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.204 
05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b13f78b643d24be3bc253523cb8703d8 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b13f78b643d24be3bc253523cb8703d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:44.204 [ 0]:0x1 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b13f78b643d24be3bc253523cb8703d8 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b13f78b643d24be3bc253523cb8703d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:44.204 [ 1]:0x2 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=559497cbdea748878230358900b27904 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 559497cbdea748878230358900b27904 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.204 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:44.205 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:44.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.463 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.463 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:44.721 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:44.721 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 04100830-d747-4116-86f5-5a38200d21c5 -a 10.0.0.2 -s 4420 -i 4 00:14:44.979 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:44.979 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:44.979 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.979 05:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:44.979 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:44.979 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:47.012 [ 0]:0x2 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:47.012 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.270 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=559497cbdea748878230358900b27904 00:14:47.270 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 559497cbdea748878230358900b27904 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.270 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:47.270 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:47.270 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.270 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:47.270 [ 0]:0x1 00:14:47.270 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:47.270 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.527 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b13f78b643d24be3bc253523cb8703d8 00:14:47.527 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b13f78b643d24be3bc253523cb8703d8 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.527 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:47.527 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.527 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:47.527 [ 1]:0x2 00:14:47.527 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:47.527 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.527 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=559497cbdea748878230358900b27904 00:14:47.528 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 559497cbdea748878230358900b27904 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.528 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.785 [ 0]:0x2 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=559497cbdea748878230358900b27904 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 559497cbdea748878230358900b27904 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.785 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:48.043 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:48.043 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 04100830-d747-4116-86f5-5a38200d21c5 -a 10.0.0.2 -s 4420 -i 4 00:14:48.043 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:48.043 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:48.043 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:48.043 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:48.043 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:48.043 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:50.571 [ 0]:0x1 00:14:50.571 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:50.571 05:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b13f78b643d24be3bc253523cb8703d8 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b13f78b643d24be3bc253523cb8703d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:50.571 [ 1]:0x2 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=559497cbdea748878230358900b27904 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 559497cbdea748878230358900b27904 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:50.571 
05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:50.571 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:50.572 [ 0]:0x2 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=559497cbdea748878230358900b27904 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 559497cbdea748878230358900b27904 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.572 05:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:50.572 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:50.830 [2024-12-10 05:39:38.544133] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:50.830 request: 00:14:50.830 { 00:14:50.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.830 "nsid": 2, 00:14:50.830 "host": "nqn.2016-06.io.spdk:host1", 00:14:50.830 "method": "nvmf_ns_remove_host", 00:14:50.830 "req_id": 1 00:14:50.830 } 00:14:50.830 Got JSON-RPC error response 00:14:50.830 response: 00:14:50.830 { 00:14:50.830 "code": -32602, 00:14:50.830 "message": "Invalid parameters" 00:14:50.830 } 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:50.830 05:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:50.830 [ 0]:0x2 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:50.830 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=559497cbdea748878230358900b27904 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 559497cbdea748878230358900b27904 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1148743 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1148743 
/var/tmp/host.sock 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1148743 ']' 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:51.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.088 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:51.088 [2024-12-10 05:39:38.917994] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
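The `ns_is_visible` checks above decide visibility by comparing the NGUID that `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid` reports against an all-zero string. The comparison step can be sketched in isolation (a simplified reconstruction from the log, needing no NVMe hardware; the helper name `nguid_is_visible` is hypothetical):

```shell
# Sketch of the pass/fail criterion used by ns_is_visible in this log
# (target/ns_masking.sh@44-45): the real helper fetches the NGUID via
# `nvme id-ns ... | jq -r .nguid`; only the comparison is reproduced here.
nguid_is_visible() {
  # The target reports an all-zero NGUID for a namespace masked from this host
  [[ $1 != "00000000000000000000000000000000" ]]
}
```

In the log, `559497cbdea748878230358900b27904` passes this check (namespace 2 visible) while the all-zero NGUID for namespace 1 fails it, which is what the `NOT ns_is_visible 0x1` wrapper expects.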
00:14:51.088 [2024-12-10 05:39:38.918043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148743 ] 00:14:51.346 [2024-12-10 05:39:38.993727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.346 [2024-12-10 05:39:39.032842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.604 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.604 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:51.604 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.604 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:51.862 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 62f50665-7e9c-443c-a1ce-a9a74247e8d4 00:14:51.862 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:51.862 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 62F506657E9C443CA1CEA9A74247E8D4 -i 00:14:52.119 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 30334e02-0a90-4461-97ea-f9b60231d058 00:14:52.119 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:52.120 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 30334E020A90446197EAF9B60231D058 -i 00:14:52.377 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:52.377 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:52.635 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:52.635 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:52.892 nvme0n1 00:14:52.892 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:52.892 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:53.150 nvme1n2 00:14:53.150 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:53.150 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:53.150 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:53.150 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:53.150 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:53.408 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:53.408 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:53.408 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:53.408 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:53.666 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 62f50665-7e9c-443c-a1ce-a9a74247e8d4 == \6\2\f\5\0\6\6\5\-\7\e\9\c\-\4\4\3\c\-\a\1\c\e\-\a\9\a\7\4\2\4\7\e\8\d\4 ]] 00:14:53.666 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:53.666 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:53.666 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:53.923 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 30334e02-0a90-4461-97ea-f9b60231d058 == \3\0\3\3\4\e\0\2\-\0\a\9\0\-\4\4\6\1\-\9\7\e\a\-\f\9\b\6\0\2\3\1\d\0\5\8 ]] 00:14:53.923 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.181 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 62f50665-7e9c-443c-a1ce-a9a74247e8d4 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 62F506657E9C443CA1CEA9A74247E8D4 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 62F506657E9C443CA1CEA9A74247E8D4 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:54.181 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 62F506657E9C443CA1CEA9A74247E8D4 00:14:54.439 [2024-12-10 05:39:42.206595] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:54.439 [2024-12-10 05:39:42.206624] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:54.439 [2024-12-10 05:39:42.206632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.439 request: 00:14:54.439 { 00:14:54.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.439 "namespace": { 00:14:54.439 "bdev_name": "invalid", 00:14:54.439 "nsid": 1, 00:14:54.439 "nguid": "62F506657E9C443CA1CEA9A74247E8D4", 00:14:54.439 "no_auto_visible": false, 00:14:54.439 "hide_metadata": false 00:14:54.439 }, 00:14:54.439 "method": "nvmf_subsystem_add_ns", 00:14:54.439 "req_id": 1 00:14:54.439 } 00:14:54.439 Got JSON-RPC error response 00:14:54.439 response: 00:14:54.439 { 00:14:54.439 "code": -32602, 00:14:54.439 "message": "Invalid parameters" 00:14:54.439 } 00:14:54.439 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:54.439 05:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:54.439 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:54.439 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:54.439 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 62f50665-7e9c-443c-a1ce-a9a74247e8d4 00:14:54.439 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:54.439 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 62F506657E9C443CA1CEA9A74247E8D4 -i 00:14:54.697 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:56.595 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:56.595 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:56.595 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:56.853 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:56.853 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1148743 00:14:56.853 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1148743 ']' 00:14:56.853 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1148743 00:14:56.853 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:56.853 05:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.853 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1148743 00:14:56.853 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:56.853 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:56.853 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1148743' 00:14:56.853 killing process with pid 1148743 00:14:56.853 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1148743 00:14:56.853 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1148743 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:14:57.417 rmmod nvme_tcp 00:14:57.417 rmmod nvme_fabrics 00:14:57.417 rmmod nvme_keyring 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1146799 ']' 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1146799 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1146799 ']' 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1146799 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.417 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146799 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146799' 00:14:57.675 killing process with pid 1146799 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1146799 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1146799 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.675 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:00.211 00:15:00.211 real 0m25.919s 00:15:00.211 user 0m30.845s 00:15:00.211 sys 0m7.092s 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.211 ************************************ 00:15:00.211 END TEST nvmf_ns_masking 00:15:00.211 ************************************ 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:00.211 ************************************ 00:15:00.211 START TEST nvmf_nvme_cli 00:15:00.211 ************************************ 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:00.211 * Looking for test storage... 00:15:00.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:00.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.211 --rc genhtml_branch_coverage=1 00:15:00.211 --rc genhtml_function_coverage=1 00:15:00.211 --rc genhtml_legend=1 00:15:00.211 --rc geninfo_all_blocks=1 00:15:00.211 --rc geninfo_unexecuted_blocks=1 00:15:00.211 
00:15:00.211 ' 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:00.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.211 --rc genhtml_branch_coverage=1 00:15:00.211 --rc genhtml_function_coverage=1 00:15:00.211 --rc genhtml_legend=1 00:15:00.211 --rc geninfo_all_blocks=1 00:15:00.211 --rc geninfo_unexecuted_blocks=1 00:15:00.211 00:15:00.211 ' 00:15:00.211 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:00.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.212 --rc genhtml_branch_coverage=1 00:15:00.212 --rc genhtml_function_coverage=1 00:15:00.212 --rc genhtml_legend=1 00:15:00.212 --rc geninfo_all_blocks=1 00:15:00.212 --rc geninfo_unexecuted_blocks=1 00:15:00.212 00:15:00.212 ' 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:00.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.212 --rc genhtml_branch_coverage=1 00:15:00.212 --rc genhtml_function_coverage=1 00:15:00.212 --rc genhtml_legend=1 00:15:00.212 --rc geninfo_all_blocks=1 00:15:00.212 --rc geninfo_unexecuted_blocks=1 00:15:00.212 00:15:00.212 ' 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
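The `lt 1.15 2` / `cmp_versions` calls above compare dotted version strings component by component, treating a missing component as 0. A minimal sketch of that logic (an assumption-based simplification of `scripts/common.sh`; the function name `version_lt` is hypothetical, and numeric components are assumed, so leading zeros would be misread as octal):

```shell
# Simplified dotted-version "less than" comparison, modeled on the
# cmp_versions flow in the log (split on '.', compare numerically,
# missing components default to 0). Returns 0 when $1 < $2.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0} y=${b[i]:-0}
    ((x < y)) && return 0
    ((x > y)) && return 1
  done
  return 1  # equal versions are not "less than"
}
```

With this, `version_lt 1.15 2` succeeds, matching the `lt 1.15 2` branch the log takes when checking the lcov version.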
00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.212 05:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:00.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:00.212 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:06.781 05:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:06.781 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:06.781 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.781 05:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:06.781 Found net devices under 0000:af:00.0: cvl_0_0 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:06.781 Found net devices under 0000:af:00.1: cvl_0_1 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.781 05:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.781 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:06.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:15:06.782 00:15:06.782 --- 10.0.0.2 ping statistics --- 00:15:06.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.782 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:15:06.782 00:15:06.782 --- 10.0.0.1 ping statistics --- 00:15:06.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.782 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:06.782 05:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1153370 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1153370 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1153370 ']' 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.782 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.782 [2024-12-10 05:39:53.951091] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:15:06.782 [2024-12-10 05:39:53.951138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.782 [2024-12-10 05:39:54.030538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.782 [2024-12-10 05:39:54.071970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.782 [2024-12-10 05:39:54.072006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.782 [2024-12-10 05:39:54.072013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.782 [2024-12-10 05:39:54.072019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.782 [2024-12-10 05:39:54.072024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:06.782 [2024-12-10 05:39:54.075184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.782 [2024-12-10 05:39:54.075212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.782 [2024-12-10 05:39:54.075311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.782 [2024-12-10 05:39:54.075310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.782 [2024-12-10 05:39:54.224978] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.782 Malloc0 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.782 Malloc1 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.782 [2024-12-10 05:39:54.317443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:06.782 00:15:06.782 Discovery Log Number of Records 2, Generation counter 2 00:15:06.782 =====Discovery Log Entry 0====== 00:15:06.782 trtype: tcp 00:15:06.782 adrfam: ipv4 00:15:06.782 subtype: current discovery subsystem 00:15:06.782 treq: not required 00:15:06.782 portid: 0 00:15:06.782 trsvcid: 4420 
00:15:06.782 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:06.782 traddr: 10.0.0.2 00:15:06.782 eflags: explicit discovery connections, duplicate discovery information 00:15:06.782 sectype: none 00:15:06.782 =====Discovery Log Entry 1====== 00:15:06.782 trtype: tcp 00:15:06.782 adrfam: ipv4 00:15:06.782 subtype: nvme subsystem 00:15:06.782 treq: not required 00:15:06.782 portid: 0 00:15:06.782 trsvcid: 4420 00:15:06.782 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:06.782 traddr: 10.0.0.2 00:15:06.782 eflags: none 00:15:06.782 sectype: none 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:06.782 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:06.783 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:06.783 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:06.783 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:06.783 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:06.783 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:07.714 05:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:07.714 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:07.714 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.714 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:07.714 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:07.714 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:10.239 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:10.239 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:10.239 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:10.240 
05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:10.240 /dev/nvme0n2 ]] 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:10.240 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.240 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:10.240 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:10.240 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:10.240 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:10.498 rmmod nvme_tcp 00:15:10.498 rmmod nvme_fabrics 00:15:10.498 rmmod nvme_keyring 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:10.498 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1153370 ']' 
00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1153370 00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1153370 ']' 00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1153370 00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1153370 00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1153370' 00:15:10.499 killing process with pid 1153370 00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1153370 00:15:10.499 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1153370 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.758 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.663 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:12.663 00:15:12.663 real 0m12.848s 00:15:12.663 user 0m19.231s 00:15:12.663 sys 0m5.038s 00:15:12.663 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.663 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.663 ************************************ 00:15:12.663 END TEST nvmf_nvme_cli 00:15:12.663 ************************************ 00:15:12.922 05:40:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.923 ************************************ 00:15:12.923 
START TEST nvmf_vfio_user 00:15:12.923 ************************************ 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:12.923 * Looking for test storage... 00:15:12.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.923 05:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:12.923 05:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.923 --rc genhtml_branch_coverage=1 00:15:12.923 --rc genhtml_function_coverage=1 00:15:12.923 --rc genhtml_legend=1 00:15:12.923 --rc geninfo_all_blocks=1 00:15:12.923 --rc geninfo_unexecuted_blocks=1 00:15:12.923 00:15:12.923 ' 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.923 --rc genhtml_branch_coverage=1 00:15:12.923 --rc genhtml_function_coverage=1 00:15:12.923 --rc genhtml_legend=1 00:15:12.923 --rc geninfo_all_blocks=1 00:15:12.923 --rc geninfo_unexecuted_blocks=1 00:15:12.923 00:15:12.923 ' 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.923 --rc genhtml_branch_coverage=1 00:15:12.923 --rc genhtml_function_coverage=1 00:15:12.923 --rc genhtml_legend=1 00:15:12.923 --rc geninfo_all_blocks=1 00:15:12.923 --rc geninfo_unexecuted_blocks=1 00:15:12.923 00:15:12.923 ' 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:12.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.923 --rc genhtml_branch_coverage=1 00:15:12.923 --rc genhtml_function_coverage=1 00:15:12.923 --rc genhtml_legend=1 00:15:12.923 --rc geninfo_all_blocks=1 00:15:12.923 --rc geninfo_unexecuted_blocks=1 00:15:12.923 00:15:12.923 ' 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.923 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.183 
05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:13.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:13.183 05:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1154629 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1154629' 00:15:13.183 Process pid: 1154629 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1154629 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1154629 ']' 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.183 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:13.183 [2024-12-10 05:40:00.887281] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:15:13.183 [2024-12-10 05:40:00.887331] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.183 [2024-12-10 05:40:00.960078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.183 [2024-12-10 05:40:01.000782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.183 [2024-12-10 05:40:01.000813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.184 [2024-12-10 05:40:01.000821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.184 [2024-12-10 05:40:01.000828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.184 [2024-12-10 05:40:01.000833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:13.184 [2024-12-10 05:40:01.002288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.184 [2024-12-10 05:40:01.002419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.184 [2024-12-10 05:40:01.002537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.184 [2024-12-10 05:40:01.002538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:13.442 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.442 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:13.442 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:14.378 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:14.637 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:14.637 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:14.637 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.637 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:14.637 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:14.895 Malloc1 00:15:14.896 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:14.896 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:15.154 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:15.412 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:15.412 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:15.412 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:15.671 Malloc2 00:15:15.671 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:15.671 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:15.930 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:16.190 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:16.190 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:16.190 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:16.190 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:16.190 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:16.190 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:16.190 [2024-12-10 05:40:03.976387] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:15:16.190 [2024-12-10 05:40:03.976428] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155242 ] 00:15:16.190 [2024-12-10 05:40:04.017650] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:16.190 [2024-12-10 05:40:04.021176] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:16.190 [2024-12-10 05:40:04.021198] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f14b11b0000 00:15:16.190 [2024-12-10 05:40:04.021963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.190 [2024-12-10 05:40:04.022959] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.190 [2024-12-10 05:40:04.023972] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.190 [2024-12-10 05:40:04.024975] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:16.190 [2024-12-10 05:40:04.025978] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:16.190 [2024-12-10 05:40:04.026981] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.190 [2024-12-10 05:40:04.027985] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:16.190 [2024-12-10 05:40:04.028990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:16.190 [2024-12-10 05:40:04.029995] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:16.190 [2024-12-10 05:40:04.030005] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f14b11a5000 00:15:16.190 [2024-12-10 05:40:04.030923] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:16.190 [2024-12-10 05:40:04.043373] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:16.190 [2024-12-10 05:40:04.043396] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:16.191 [2024-12-10 05:40:04.049120] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:15:16.191 [2024-12-10 05:40:04.049161] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:16.191 [2024-12-10 05:40:04.049237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:16.191 [2024-12-10 05:40:04.049251] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:16.191 [2024-12-10 05:40:04.049257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:16.191 [2024-12-10 05:40:04.050113] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:16.191 [2024-12-10 05:40:04.050122] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:16.191 [2024-12-10 05:40:04.050128] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:16.191 [2024-12-10 05:40:04.051115] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:16.191 [2024-12-10 05:40:04.051126] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:16.191 [2024-12-10 05:40:04.051133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:16.191 [2024-12-10 05:40:04.052121] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:16.191 [2024-12-10 05:40:04.052129] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:16.191 [2024-12-10 05:40:04.053125] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:16.191 [2024-12-10 05:40:04.053132] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:16.191 [2024-12-10 05:40:04.053136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:16.191 [2024-12-10 05:40:04.053142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:16.191 [2024-12-10 05:40:04.053249] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:16.191 [2024-12-10 05:40:04.053254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:16.191 [2024-12-10 05:40:04.053258] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:16.191 [2024-12-10 05:40:04.054136] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:16.191 [2024-12-10 05:40:04.055136] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:16.191 [2024-12-10 05:40:04.056144] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:15:16.191 [2024-12-10 05:40:04.057146] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.191 [2024-12-10 05:40:04.057232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:16.191 [2024-12-10 05:40:04.058164] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:16.191 [2024-12-10 05:40:04.058176] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:16.191 [2024-12-10 05:40:04.058181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058197] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:16.191 [2024-12-10 05:40:04.058204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058220] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:16.191 [2024-12-10 05:40:04.058225] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.191 [2024-12-10 05:40:04.058229] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.191 [2024-12-10 05:40:04.058240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.191 [2024-12-10 05:40:04.058291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:16.191 [2024-12-10 05:40:04.058299] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:16.191 [2024-12-10 05:40:04.058304] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:16.191 [2024-12-10 05:40:04.058307] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:16.191 [2024-12-10 05:40:04.058312] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:16.191 [2024-12-10 05:40:04.058316] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:16.191 [2024-12-10 05:40:04.058320] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:16.191 [2024-12-10 05:40:04.058324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:16.191 [2024-12-10 05:40:04.058350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:16.191 [2024-12-10 05:40:04.058360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.191 [2024-12-10 
05:40:04.058367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.191 [2024-12-10 05:40:04.058375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.191 [2024-12-10 05:40:04.058382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.191 [2024-12-10 05:40:04.058386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:16.191 [2024-12-10 05:40:04.058412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:16.191 [2024-12-10 05:40:04.058417] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:16.191 [2024-12-10 05:40:04.058421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:16.191 [2024-12-10 05:40:04.058460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:16.191 [2024-12-10 05:40:04.058508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058521] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:16.191 [2024-12-10 05:40:04.058525] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:16.191 [2024-12-10 05:40:04.058528] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.191 [2024-12-10 05:40:04.058534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:16.191 [2024-12-10 05:40:04.058547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:16.191 [2024-12-10 05:40:04.058556] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:16.191 [2024-12-10 05:40:04.058566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058579] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:16.191 [2024-12-10 05:40:04.058582] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.191 [2024-12-10 05:40:04.058585] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.191 [2024-12-10 05:40:04.058591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.191 [2024-12-10 05:40:04.058617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:16.191 [2024-12-10 05:40:04.058625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:16.191 [2024-12-10 05:40:04.058638] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:16.191 [2024-12-10 05:40:04.058641] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.191 [2024-12-10 05:40:04.058644] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.191 [2024-12-10 05:40:04.058650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.191 [2024-12-10 05:40:04.058662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:16.192 [2024-12-10 05:40:04.058671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:16.192 [2024-12-10 05:40:04.058677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:16.192 [2024-12-10 05:40:04.058683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:16.192 [2024-12-10 05:40:04.058688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:16.192 [2024-12-10 05:40:04.058693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:16.192 [2024-12-10 05:40:04.058698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:16.192 [2024-12-10 05:40:04.058702] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:16.192 [2024-12-10 05:40:04.058706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:16.192 [2024-12-10 05:40:04.058711] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:16.192 [2024-12-10 05:40:04.058726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:16.192 [2024-12-10 05:40:04.058734] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:16.192 [2024-12-10 05:40:04.058744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:16.192 [2024-12-10 05:40:04.058754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:16.192 [2024-12-10 05:40:04.058763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:16.192 [2024-12-10 05:40:04.058774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:16.192 [2024-12-10 05:40:04.058783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:16.192 [2024-12-10 05:40:04.058793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:16.192 [2024-12-10 05:40:04.058804] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:16.192 [2024-12-10 05:40:04.058808] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:16.192 [2024-12-10 05:40:04.058811] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:16.192 [2024-12-10 05:40:04.058814] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:16.192 [2024-12-10 05:40:04.058817] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:16.192 [2024-12-10 05:40:04.058823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:15:16.192 [2024-12-10 05:40:04.058829] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:16.192 [2024-12-10 05:40:04.058833] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:16.192 [2024-12-10 05:40:04.058836] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.192 [2024-12-10 05:40:04.058841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:16.192 [2024-12-10 05:40:04.058847] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:16.192 [2024-12-10 05:40:04.058851] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:16.192 [2024-12-10 05:40:04.058854] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.192 [2024-12-10 05:40:04.058860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:16.192 [2024-12-10 05:40:04.058867] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:16.192 [2024-12-10 05:40:04.058871] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:16.192 [2024-12-10 05:40:04.058874] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:16.192 [2024-12-10 05:40:04.058879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:16.192 [2024-12-10 05:40:04.058886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:15:16.192 [2024-12-10 05:40:04.058897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:16.192 [2024-12-10 05:40:04.058906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:16.192 [2024-12-10 05:40:04.058912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:16.192 ===================================================== 00:15:16.192 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.192 ===================================================== 00:15:16.192 Controller Capabilities/Features 00:15:16.192 ================================ 00:15:16.192 Vendor ID: 4e58 00:15:16.192 Subsystem Vendor ID: 4e58 00:15:16.192 Serial Number: SPDK1 00:15:16.192 Model Number: SPDK bdev Controller 00:15:16.192 Firmware Version: 25.01 00:15:16.192 Recommended Arb Burst: 6 00:15:16.192 IEEE OUI Identifier: 8d 6b 50 00:15:16.192 Multi-path I/O 00:15:16.192 May have multiple subsystem ports: Yes 00:15:16.192 May have multiple controllers: Yes 00:15:16.192 Associated with SR-IOV VF: No 00:15:16.192 Max Data Transfer Size: 131072 00:15:16.192 Max Number of Namespaces: 32 00:15:16.192 Max Number of I/O Queues: 127 00:15:16.192 NVMe Specification Version (VS): 1.3 00:15:16.192 NVMe Specification Version (Identify): 1.3 00:15:16.192 Maximum Queue Entries: 256 00:15:16.192 Contiguous Queues Required: Yes 00:15:16.192 Arbitration Mechanisms Supported 00:15:16.192 Weighted Round Robin: Not Supported 00:15:16.192 Vendor Specific: Not Supported 00:15:16.192 Reset Timeout: 15000 ms 00:15:16.192 Doorbell Stride: 4 bytes 00:15:16.192 NVM Subsystem Reset: Not Supported 00:15:16.192 Command Sets Supported 00:15:16.192 NVM Command Set: Supported 00:15:16.192 Boot Partition: Not Supported 00:15:16.192 Memory 
Page Size Minimum: 4096 bytes 00:15:16.192 Memory Page Size Maximum: 4096 bytes 00:15:16.192 Persistent Memory Region: Not Supported 00:15:16.192 Optional Asynchronous Events Supported 00:15:16.192 Namespace Attribute Notices: Supported 00:15:16.192 Firmware Activation Notices: Not Supported 00:15:16.192 ANA Change Notices: Not Supported 00:15:16.192 PLE Aggregate Log Change Notices: Not Supported 00:15:16.192 LBA Status Info Alert Notices: Not Supported 00:15:16.192 EGE Aggregate Log Change Notices: Not Supported 00:15:16.192 Normal NVM Subsystem Shutdown event: Not Supported 00:15:16.192 Zone Descriptor Change Notices: Not Supported 00:15:16.192 Discovery Log Change Notices: Not Supported 00:15:16.192 Controller Attributes 00:15:16.192 128-bit Host Identifier: Supported 00:15:16.192 Non-Operational Permissive Mode: Not Supported 00:15:16.192 NVM Sets: Not Supported 00:15:16.192 Read Recovery Levels: Not Supported 00:15:16.192 Endurance Groups: Not Supported 00:15:16.192 Predictable Latency Mode: Not Supported 00:15:16.192 Traffic Based Keep ALive: Not Supported 00:15:16.192 Namespace Granularity: Not Supported 00:15:16.192 SQ Associations: Not Supported 00:15:16.192 UUID List: Not Supported 00:15:16.192 Multi-Domain Subsystem: Not Supported 00:15:16.192 Fixed Capacity Management: Not Supported 00:15:16.192 Variable Capacity Management: Not Supported 00:15:16.192 Delete Endurance Group: Not Supported 00:15:16.192 Delete NVM Set: Not Supported 00:15:16.192 Extended LBA Formats Supported: Not Supported 00:15:16.192 Flexible Data Placement Supported: Not Supported 00:15:16.192 00:15:16.192 Controller Memory Buffer Support 00:15:16.192 ================================ 00:15:16.192 Supported: No 00:15:16.192 00:15:16.192 Persistent Memory Region Support 00:15:16.192 ================================ 00:15:16.192 Supported: No 00:15:16.192 00:15:16.192 Admin Command Set Attributes 00:15:16.192 ============================ 00:15:16.192 Security Send/Receive: Not Supported 
00:15:16.192 Format NVM: Not Supported 00:15:16.192 Firmware Activate/Download: Not Supported 00:15:16.192 Namespace Management: Not Supported 00:15:16.192 Device Self-Test: Not Supported 00:15:16.192 Directives: Not Supported 00:15:16.192 NVMe-MI: Not Supported 00:15:16.192 Virtualization Management: Not Supported 00:15:16.192 Doorbell Buffer Config: Not Supported 00:15:16.192 Get LBA Status Capability: Not Supported 00:15:16.192 Command & Feature Lockdown Capability: Not Supported 00:15:16.192 Abort Command Limit: 4 00:15:16.192 Async Event Request Limit: 4 00:15:16.192 Number of Firmware Slots: N/A 00:15:16.192 Firmware Slot 1 Read-Only: N/A 00:15:16.192 Firmware Activation Without Reset: N/A 00:15:16.192 Multiple Update Detection Support: N/A 00:15:16.192 Firmware Update Granularity: No Information Provided 00:15:16.192 Per-Namespace SMART Log: No 00:15:16.192 Asymmetric Namespace Access Log Page: Not Supported 00:15:16.192 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:16.192 Command Effects Log Page: Supported 00:15:16.192 Get Log Page Extended Data: Supported 00:15:16.192 Telemetry Log Pages: Not Supported 00:15:16.192 Persistent Event Log Pages: Not Supported 00:15:16.192 Supported Log Pages Log Page: May Support 00:15:16.192 Commands Supported & Effects Log Page: Not Supported 00:15:16.193 Feature Identifiers & Effects Log Page:May Support 00:15:16.193 NVMe-MI Commands & Effects Log Page: May Support 00:15:16.193 Data Area 4 for Telemetry Log: Not Supported 00:15:16.193 Error Log Page Entries Supported: 128 00:15:16.193 Keep Alive: Supported 00:15:16.193 Keep Alive Granularity: 10000 ms 00:15:16.193 00:15:16.193 NVM Command Set Attributes 00:15:16.193 ========================== 00:15:16.193 Submission Queue Entry Size 00:15:16.193 Max: 64 00:15:16.193 Min: 64 00:15:16.193 Completion Queue Entry Size 00:15:16.193 Max: 16 00:15:16.193 Min: 16 00:15:16.193 Number of Namespaces: 32 00:15:16.193 Compare Command: Supported 00:15:16.193 Write Uncorrectable 
Command: Not Supported 00:15:16.193 Dataset Management Command: Supported 00:15:16.193 Write Zeroes Command: Supported 00:15:16.193 Set Features Save Field: Not Supported 00:15:16.193 Reservations: Not Supported 00:15:16.193 Timestamp: Not Supported 00:15:16.193 Copy: Supported 00:15:16.193 Volatile Write Cache: Present 00:15:16.193 Atomic Write Unit (Normal): 1 00:15:16.193 Atomic Write Unit (PFail): 1 00:15:16.193 Atomic Compare & Write Unit: 1 00:15:16.193 Fused Compare & Write: Supported 00:15:16.193 Scatter-Gather List 00:15:16.193 SGL Command Set: Supported (Dword aligned) 00:15:16.193 SGL Keyed: Not Supported 00:15:16.193 SGL Bit Bucket Descriptor: Not Supported 00:15:16.193 SGL Metadata Pointer: Not Supported 00:15:16.193 Oversized SGL: Not Supported 00:15:16.193 SGL Metadata Address: Not Supported 00:15:16.193 SGL Offset: Not Supported 00:15:16.193 Transport SGL Data Block: Not Supported 00:15:16.193 Replay Protected Memory Block: Not Supported 00:15:16.193 00:15:16.193 Firmware Slot Information 00:15:16.193 ========================= 00:15:16.193 Active slot: 1 00:15:16.193 Slot 1 Firmware Revision: 25.01 00:15:16.193 00:15:16.193 00:15:16.193 Commands Supported and Effects 00:15:16.193 ============================== 00:15:16.193 Admin Commands 00:15:16.193 -------------- 00:15:16.193 Get Log Page (02h): Supported 00:15:16.193 Identify (06h): Supported 00:15:16.193 Abort (08h): Supported 00:15:16.193 Set Features (09h): Supported 00:15:16.193 Get Features (0Ah): Supported 00:15:16.193 Asynchronous Event Request (0Ch): Supported 00:15:16.193 Keep Alive (18h): Supported 00:15:16.193 I/O Commands 00:15:16.193 ------------ 00:15:16.193 Flush (00h): Supported LBA-Change 00:15:16.193 Write (01h): Supported LBA-Change 00:15:16.193 Read (02h): Supported 00:15:16.193 Compare (05h): Supported 00:15:16.193 Write Zeroes (08h): Supported LBA-Change 00:15:16.193 Dataset Management (09h): Supported LBA-Change 00:15:16.193 Copy (19h): Supported LBA-Change 00:15:16.193 
00:15:16.193 Error Log 00:15:16.193 ========= 00:15:16.193 00:15:16.193 Arbitration 00:15:16.193 =========== 00:15:16.193 Arbitration Burst: 1 00:15:16.193 00:15:16.193 Power Management 00:15:16.193 ================ 00:15:16.193 Number of Power States: 1 00:15:16.193 Current Power State: Power State #0 00:15:16.193 Power State #0: 00:15:16.193 Max Power: 0.00 W 00:15:16.193 Non-Operational State: Operational 00:15:16.193 Entry Latency: Not Reported 00:15:16.193 Exit Latency: Not Reported 00:15:16.193 Relative Read Throughput: 0 00:15:16.193 Relative Read Latency: 0 00:15:16.193 Relative Write Throughput: 0 00:15:16.193 Relative Write Latency: 0 00:15:16.193 Idle Power: Not Reported 00:15:16.193 Active Power: Not Reported 00:15:16.193 Non-Operational Permissive Mode: Not Supported 00:15:16.193 00:15:16.193 Health Information 00:15:16.193 ================== 00:15:16.193 Critical Warnings: 00:15:16.193 Available Spare Space: OK 00:15:16.193 Temperature: OK 00:15:16.193 Device Reliability: OK 00:15:16.193 Read Only: No 00:15:16.193 Volatile Memory Backup: OK 00:15:16.193 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:16.193 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:16.193 Available Spare: 0% 00:15:16.193 Available Spare Threshold: 0% [2024-12-10 05:40:04.058990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:16.193 [2024-12-10 05:40:04.058999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:16.193 [2024-12-10 05:40:04.059023] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:16.193 [2024-12-10 05:40:04.059031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.193 [2024-12-10 05:40:04.059036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.193 [2024-12-10 05:40:04.059041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.193 [2024-12-10 05:40:04.059047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.193 [2024-12-10 05:40:04.059173] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:16.193 [2024-12-10 05:40:04.059183] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:16.193 [2024-12-10 05:40:04.060173] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.193 [2024-12-10 05:40:04.060222] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:16.193 [2024-12-10 05:40:04.060228] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:16.193 [2024-12-10 05:40:04.061182] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:16.193 [2024-12-10 05:40:04.061193] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:16.193 [2024-12-10 05:40:04.061238] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:16.193 [2024-12-10 05:40:04.062206] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:16.452 Life Percentage Used: 0%
00:15:16.452 Data Units Read: 0 00:15:16.452 Data Units Written: 0 00:15:16.452 Host Read Commands: 0 00:15:16.452 Host Write Commands: 0 00:15:16.452 Controller Busy Time: 0 minutes 00:15:16.452 Power Cycles: 0 00:15:16.452 Power On Hours: 0 hours 00:15:16.452 Unsafe Shutdowns: 0 00:15:16.452 Unrecoverable Media Errors: 0 00:15:16.452 Lifetime Error Log Entries: 0 00:15:16.452 Warning Temperature Time: 0 minutes 00:15:16.452 Critical Temperature Time: 0 minutes 00:15:16.452 00:15:16.452 Number of Queues 00:15:16.452 ================ 00:15:16.452 Number of I/O Submission Queues: 127 00:15:16.452 Number of I/O Completion Queues: 127 00:15:16.452 00:15:16.452 Active Namespaces 00:15:16.452 ================= 00:15:16.452 Namespace ID:1 00:15:16.452 Error Recovery Timeout: Unlimited 00:15:16.452 Command Set Identifier: NVM (00h) 00:15:16.452 Deallocate: Supported 00:15:16.452 Deallocated/Unwritten Error: Not Supported 00:15:16.452 Deallocated Read Value: Unknown 00:15:16.452 Deallocate in Write Zeroes: Not Supported 00:15:16.452 Deallocated Guard Field: 0xFFFF 00:15:16.452 Flush: Supported 00:15:16.452 Reservation: Supported 00:15:16.452 Namespace Sharing Capabilities: Multiple Controllers 00:15:16.452 Size (in LBAs): 131072 (0GiB) 00:15:16.452 Capacity (in LBAs): 131072 (0GiB) 00:15:16.452 Utilization (in LBAs): 131072 (0GiB) 00:15:16.452 NGUID: 2A2A4EC2D207440E8EC12F8D73E43864 00:15:16.452 UUID: 2a2a4ec2-d207-440e-8ec1-2f8d73e43864 00:15:16.452 Thin Provisioning: Not Supported 00:15:16.452 Per-NS Atomic Units: Yes 00:15:16.452 Atomic Boundary Size (Normal): 0 00:15:16.452 Atomic Boundary Size (PFail): 0 00:15:16.452 Atomic Boundary Offset: 0 00:15:16.452 Maximum Single Source Range Length: 65535 00:15:16.452 Maximum Copy Length: 65535 00:15:16.452 Maximum Source Range Count: 1 00:15:16.452 NGUID/EUI64 Never Reused: No 00:15:16.452 Namespace Write Protected: No 00:15:16.452 Number of LBA Formats: 1 00:15:16.452 Current LBA Format: LBA Format #00 00:15:16.452 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:15:16.453 00:15:16.453 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:16.453 [2024-12-10 05:40:04.293267] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.717 Initializing NVMe Controllers 00:15:21.717 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:21.717 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:21.717 Initialization complete. Launching workers. 00:15:21.717 ======================================================== 00:15:21.717 Latency(us) 00:15:21.717 Device Information : IOPS MiB/s Average min max 00:15:21.717 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39936.59 156.00 3204.91 951.50 9610.26 00:15:21.717 ======================================================== 00:15:21.717 Total : 39936.59 156.00 3204.91 951.50 9610.26 00:15:21.717 00:15:21.717 [2024-12-10 05:40:09.313204] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.717 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:21.717 [2024-12-10 05:40:09.550291] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.001 Initializing NVMe Controllers 00:15:27.001 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.001 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:27.001 Initialization complete. Launching workers. 00:15:27.001 ======================================================== 00:15:27.001 Latency(us) 00:15:27.001 Device Information : IOPS MiB/s Average min max 00:15:27.001 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.10 62.71 7978.40 7387.86 8351.00 00:15:27.001 ======================================================== 00:15:27.001 Total : 16054.10 62.71 7978.40 7387.86 8351.00 00:15:27.001 00:15:27.001 [2024-12-10 05:40:14.592194] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.001 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:27.001 [2024-12-10 05:40:14.802191] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:32.264 [2024-12-10 05:40:19.876476] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:32.264 Initializing NVMe Controllers 00:15:32.264 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:32.264 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:32.264 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:32.264 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:32.264 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:32.264 Initialization complete. 
Launching workers. 00:15:32.264 Starting thread on core 2 00:15:32.264 Starting thread on core 3 00:15:32.264 Starting thread on core 1 00:15:32.264 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:32.522 [2024-12-10 05:40:20.182591] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.807 [2024-12-10 05:40:23.250621] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.807 Initializing NVMe Controllers 00:15:35.807 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.807 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.807 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:35.807 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:35.807 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:35.807 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:35.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:35.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:35.807 Initialization complete. Launching workers. 
00:15:35.807 Starting thread on core 1 with urgent priority queue 00:15:35.807 Starting thread on core 2 with urgent priority queue 00:15:35.807 Starting thread on core 3 with urgent priority queue 00:15:35.807 Starting thread on core 0 with urgent priority queue 00:15:35.807 SPDK bdev Controller (SPDK1 ) core 0: 7672.00 IO/s 13.03 secs/100000 ios 00:15:35.807 SPDK bdev Controller (SPDK1 ) core 1: 8545.33 IO/s 11.70 secs/100000 ios 00:15:35.807 SPDK bdev Controller (SPDK1 ) core 2: 9874.67 IO/s 10.13 secs/100000 ios 00:15:35.807 SPDK bdev Controller (SPDK1 ) core 3: 9022.00 IO/s 11.08 secs/100000 ios 00:15:35.807 ======================================================== 00:15:35.807 00:15:35.807 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:35.807 [2024-12-10 05:40:23.538679] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.807 Initializing NVMe Controllers 00:15:35.807 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.807 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.807 Namespace ID: 1 size: 0GB 00:15:35.807 Initialization complete. 00:15:35.807 INFO: using host memory buffer for IO 00:15:35.807 Hello world! 
00:15:35.807 [2024-12-10 05:40:23.572900] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.807 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:36.065 [2024-12-10 05:40:23.851596] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:36.998 Initializing NVMe Controllers 00:15:36.998 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.998 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.998 Initialization complete. Launching workers. 00:15:36.998 submit (in ns) avg, min, max = 7285.4, 3181.0, 4003753.3 00:15:36.998 complete (in ns) avg, min, max = 21424.0, 1759.0, 4004371.4 00:15:36.998 00:15:36.998 Submit histogram 00:15:36.998 ================ 00:15:36.998 Range in us Cumulative Count 00:15:36.998 3.170 - 3.185: 0.0061% ( 1) 00:15:36.998 3.185 - 3.200: 0.1596% ( 25) 00:15:36.998 3.200 - 3.215: 1.3504% ( 194) 00:15:36.998 3.215 - 3.230: 4.2784% ( 477) 00:15:36.998 3.230 - 3.246: 8.0842% ( 620) 00:15:36.998 3.246 - 3.261: 12.6266% ( 740) 00:15:36.998 3.261 - 3.276: 18.8509% ( 1014) 00:15:36.998 3.276 - 3.291: 24.7008% ( 953) 00:15:36.998 3.291 - 3.307: 30.6795% ( 974) 00:15:36.998 3.307 - 3.322: 36.3759% ( 928) 00:15:36.998 3.322 - 3.337: 42.7230% ( 1034) 00:15:36.998 3.337 - 3.352: 47.4372% ( 768) 00:15:36.998 3.352 - 3.368: 52.7162% ( 860) 00:15:36.998 3.368 - 3.383: 59.0817% ( 1037) 00:15:36.998 3.383 - 3.398: 64.0354% ( 807) 00:15:36.998 3.398 - 3.413: 69.2652% ( 852) 00:15:36.998 3.413 - 3.429: 74.1514% ( 796) 00:15:36.998 3.429 - 3.444: 78.7367% ( 747) 00:15:36.998 3.444 - 3.459: 81.8918% ( 514) 00:15:36.998 3.459 - 3.474: 84.3779% ( 405) 00:15:36.998 3.474 - 3.490: 86.1887% ( 295) 
00:15:36.998 3.490 - 3.505: 87.5330% ( 219) 00:15:36.998 3.505 - 3.520: 88.3678% ( 136) 00:15:36.998 3.520 - 3.535: 89.0799% ( 116) 00:15:36.998 3.535 - 3.550: 89.8410% ( 124) 00:15:36.998 3.550 - 3.566: 90.5838% ( 121) 00:15:36.998 3.566 - 3.581: 91.4493% ( 141) 00:15:36.998 3.581 - 3.596: 92.1245% ( 110) 00:15:36.998 3.596 - 3.611: 92.9777% ( 139) 00:15:36.998 3.611 - 3.627: 93.8494% ( 142) 00:15:36.998 3.627 - 3.642: 94.7271% ( 143) 00:15:36.998 3.642 - 3.657: 95.6786% ( 155) 00:15:36.998 3.657 - 3.672: 96.3845% ( 115) 00:15:36.998 3.672 - 3.688: 97.1211% ( 120) 00:15:36.998 3.688 - 3.703: 97.7165% ( 97) 00:15:36.998 3.703 - 3.718: 98.0910% ( 61) 00:15:36.998 3.718 - 3.733: 98.6496% ( 91) 00:15:36.998 3.733 - 3.749: 98.9319% ( 46) 00:15:36.998 3.749 - 3.764: 99.1406% ( 34) 00:15:36.998 3.764 - 3.779: 99.2941% ( 25) 00:15:36.998 3.779 - 3.794: 99.4230% ( 21) 00:15:36.998 3.794 - 3.810: 99.5028% ( 13) 00:15:36.998 3.810 - 3.825: 99.5335% ( 5) 00:15:36.998 3.825 - 3.840: 99.5642% ( 5) 00:15:36.998 3.840 - 3.855: 99.5765% ( 2) 00:15:36.998 3.855 - 3.870: 99.5887% ( 2) 00:15:36.998 3.870 - 3.886: 99.5949% ( 1) 00:15:36.998 3.886 - 3.901: 99.6010% ( 1) 00:15:36.998 3.901 - 3.931: 99.6133% ( 2) 00:15:36.998 3.931 - 3.962: 99.6378% ( 4) 00:15:36.998 3.962 - 3.992: 99.6501% ( 2) 00:15:36.998 4.053 - 4.084: 99.6563% ( 1) 00:15:36.998 4.084 - 4.114: 99.6624% ( 1) 00:15:36.998 4.328 - 4.358: 99.6685% ( 1) 00:15:36.998 4.450 - 4.480: 99.6747% ( 1) 00:15:36.998 5.181 - 5.211: 99.6808% ( 1) 00:15:36.998 5.333 - 5.364: 99.6869% ( 1) 00:15:36.998 5.364 - 5.394: 99.6931% ( 1) 00:15:36.998 5.425 - 5.455: 99.6992% ( 1) 00:15:36.998 5.455 - 5.486: 99.7054% ( 1) 00:15:36.998 5.608 - 5.638: 99.7115% ( 1) 00:15:36.998 5.669 - 5.699: 99.7176% ( 1) 00:15:36.998 5.760 - 5.790: 99.7299% ( 2) 00:15:36.998 5.882 - 5.912: 99.7361% ( 1) 00:15:36.998 5.973 - 6.004: 99.7422% ( 1) 00:15:36.998 6.248 - 6.278: 99.7483% ( 1) 00:15:36.998 6.400 - 6.430: 99.7545% ( 1) 00:15:36.998 6.430 - 6.461: 
99.7606% ( 1) 00:15:36.998 6.461 - 6.491: 99.7667% ( 1) 00:15:36.998 6.705 - 6.735: 99.7790% ( 2) 00:15:36.998 6.766 - 6.796: 99.7852% ( 1) 00:15:36.998 6.888 - 6.918: 99.7913% ( 1) 00:15:36.998 7.040 - 7.070: 99.7974% ( 1) 00:15:36.998 7.101 - 7.131: 99.8097% ( 2) 00:15:36.998 7.223 - 7.253: 99.8158% ( 1) 00:15:36.998 7.314 - 7.345: 99.8220% ( 1) 00:15:36.998 7.345 - 7.375: 99.8281% ( 1) 00:15:36.998 7.375 - 7.406: 99.8343% ( 1) 00:15:36.998 7.406 - 7.436: 99.8404% ( 1) 00:15:36.998 [2024-12-10 05:40:24.872782] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:37.257 7.467 - 7.497: 99.8465% ( 1) 00:15:37.257 7.589 - 7.619: 99.8527% ( 1) 00:15:37.257 7.619 - 7.650: 99.8588% ( 1) 00:15:37.257 8.046 - 8.107: 99.8650% ( 1) 00:15:37.257 8.168 - 8.229: 99.8711% ( 1) 00:15:37.257 8.350 - 8.411: 99.8772% ( 1) 00:15:37.257 8.594 - 8.655: 99.8834% ( 1) 00:15:37.257 8.716 - 8.777: 99.8895% ( 1) 00:15:37.257 14.263 - 14.324: 99.8956% ( 1) 00:15:37.257 14.994 - 15.055: 99.9018% ( 1) 00:15:37.257 3573.272 - 3588.876: 99.9079% ( 1) 00:15:37.257 3994.575 - 4025.783: 100.0000% ( 15) 00:15:37.257 00:15:37.257 Complete histogram 00:15:37.257 ================== 00:15:37.257 Range in us Cumulative Count 00:15:37.257 1.752 - 1.760: 0.0123% ( 2) 00:15:37.257 1.760 - 1.768: 1.1110% ( 179) 00:15:37.257 1.768 - 1.775: 12.9151% ( 1923) 00:15:37.257 1.775 - 1.783: 42.0416% ( 4745) 00:15:37.257 1.783 - 1.790: 62.2921% ( 3299) 00:15:37.257 1.790 - 1.798: 67.6815% ( 878) 00:15:37.257 1.798 - 1.806: 70.3579% ( 436) 00:15:37.257 1.806 - 1.813: 72.1564% ( 293) 00:15:37.257 1.813 - 1.821: 72.9114% ( 123) 00:15:37.257 1.821 - 1.829: 73.8690% ( 156) 00:15:37.257 1.829 - 1.836: 78.2395% ( 712) 00:15:37.257 1.836 - 1.844: 86.1457% ( 1288) 00:15:37.257 1.844 - 1.851: 92.6708% ( 1063) 00:15:37.257 1.851 - 1.859: 95.7400% ( 500) 00:15:37.257 1.859 - 1.867: 97.2132% ( 240) 00:15:37.257 1.867 - 1.874: 97.9191% ( 115) 00:15:37.257 1.874 - 1.882: 
98.1830% ( 43) 00:15:37.257 1.882 - 1.890: 98.3488% ( 27) 00:15:37.257 1.890 - 1.897: 98.5084% ( 26) 00:15:37.257 1.897 - 1.905: 98.5759% ( 11) 00:15:37.257 1.905 - 1.912: 98.7355% ( 26) 00:15:37.257 1.912 - 1.920: 98.9012% ( 27) 00:15:37.257 1.920 - 1.928: 99.0117% ( 18) 00:15:37.257 1.928 - 1.935: 99.0915% ( 13) 00:15:37.257 1.935 - 1.943: 99.1161% ( 4) 00:15:37.257 1.943 - 1.950: 99.1590% ( 7) 00:15:37.257 1.950 - 1.966: 99.1836% ( 4) 00:15:37.257 1.981 - 1.996: 99.1897% ( 1) 00:15:37.257 1.996 - 2.011: 99.2020% ( 2) 00:15:37.257 2.011 - 2.027: 99.2082% ( 1) 00:15:37.257 2.027 - 2.042: 99.2204% ( 2) 00:15:37.257 2.042 - 2.057: 99.2266% ( 1) 00:15:37.257 2.057 - 2.072: 99.2388% ( 2) 00:15:37.257 2.072 - 2.088: 99.2573% ( 3) 00:15:37.257 2.088 - 2.103: 99.2634% ( 1) 00:15:37.257 2.118 - 2.133: 99.2757% ( 2) 00:15:37.257 2.149 - 2.164: 99.2880% ( 2) 00:15:37.257 2.210 - 2.225: 99.2941% ( 1) 00:15:37.257 2.286 - 2.301: 99.3002% ( 1) 00:15:37.257 2.316 - 2.331: 99.3064% ( 1) 00:15:37.257 2.331 - 2.347: 99.3125% ( 1) 00:15:37.257 2.408 - 2.423: 99.3186% ( 1) 00:15:37.257 2.560 - 2.575: 99.3248% ( 1) 00:15:37.257 3.688 - 3.703: 99.3309% ( 1) 00:15:37.257 3.886 - 3.901: 99.3371% ( 1) 00:15:37.257 3.931 - 3.962: 99.3432% ( 1) 00:15:37.257 3.992 - 4.023: 99.3493% ( 1) 00:15:37.257 4.236 - 4.267: 99.3555% ( 1) 00:15:37.257 4.571 - 4.602: 99.3616% ( 1) 00:15:37.257 4.846 - 4.876: 99.3677% ( 1) 00:15:37.257 4.998 - 5.029: 99.3739% ( 1) 00:15:37.257 5.120 - 5.150: 99.3862% ( 2) 00:15:37.257 5.333 - 5.364: 99.3984% ( 2) 00:15:37.257 5.516 - 5.547: 99.4046% ( 1) 00:15:37.257 5.730 - 5.760: 99.4107% ( 1) 00:15:37.257 5.760 - 5.790: 99.4169% ( 1) 00:15:37.257 5.851 - 5.882: 99.4230% ( 1) 00:15:37.257 5.912 - 5.943: 99.4291% ( 1) 00:15:37.257 5.943 - 5.973: 99.4353% ( 1) 00:15:37.257 6.004 - 6.034: 99.4414% ( 1) 00:15:37.257 6.187 - 6.217: 99.4475% ( 1) 00:15:37.257 6.522 - 6.552: 99.4537% ( 1) 00:15:37.257 6.552 - 6.583: 99.4598% ( 1) 00:15:37.257 6.613 - 6.644: 99.4660% ( 1) 
00:15:37.257 7.131 - 7.162: 99.4721% ( 1) 00:15:37.257 7.162 - 7.192: 99.4782% ( 1) 00:15:37.257 7.345 - 7.375: 99.4844% ( 1) 00:15:37.257 7.375 - 7.406: 99.4905% ( 1) 00:15:37.257 10.118 - 10.179: 99.4967% ( 1) 00:15:37.257 11.032 - 11.093: 99.5028% ( 1) 00:15:37.257 12.251 - 12.312: 99.5089% ( 1) 00:15:37.257 3994.575 - 4025.783: 100.0000% ( 80) 00:15:37.257 00:15:37.257 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:37.257 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:37.257 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:37.257 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:37.257 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:37.257 [ 00:15:37.257 { 00:15:37.257 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:37.257 "subtype": "Discovery", 00:15:37.257 "listen_addresses": [], 00:15:37.257 "allow_any_host": true, 00:15:37.257 "hosts": [] 00:15:37.257 }, 00:15:37.257 { 00:15:37.257 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:37.257 "subtype": "NVMe", 00:15:37.257 "listen_addresses": [ 00:15:37.257 { 00:15:37.257 "trtype": "VFIOUSER", 00:15:37.257 "adrfam": "IPv4", 00:15:37.257 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:37.257 "trsvcid": "0" 00:15:37.257 } 00:15:37.257 ], 00:15:37.257 "allow_any_host": true, 00:15:37.257 "hosts": [], 00:15:37.257 "serial_number": "SPDK1", 00:15:37.257 "model_number": "SPDK bdev Controller", 00:15:37.257 "max_namespaces": 32, 00:15:37.257 "min_cntlid": 1, 00:15:37.257 "max_cntlid": 65519, 00:15:37.258 
"namespaces": [ 00:15:37.258 { 00:15:37.258 "nsid": 1, 00:15:37.258 "bdev_name": "Malloc1", 00:15:37.258 "name": "Malloc1", 00:15:37.258 "nguid": "2A2A4EC2D207440E8EC12F8D73E43864", 00:15:37.258 "uuid": "2a2a4ec2-d207-440e-8ec1-2f8d73e43864" 00:15:37.258 } 00:15:37.258 ] 00:15:37.258 }, 00:15:37.258 { 00:15:37.258 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:37.258 "subtype": "NVMe", 00:15:37.258 "listen_addresses": [ 00:15:37.258 { 00:15:37.258 "trtype": "VFIOUSER", 00:15:37.258 "adrfam": "IPv4", 00:15:37.258 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:37.258 "trsvcid": "0" 00:15:37.258 } 00:15:37.258 ], 00:15:37.258 "allow_any_host": true, 00:15:37.258 "hosts": [], 00:15:37.258 "serial_number": "SPDK2", 00:15:37.258 "model_number": "SPDK bdev Controller", 00:15:37.258 "max_namespaces": 32, 00:15:37.258 "min_cntlid": 1, 00:15:37.258 "max_cntlid": 65519, 00:15:37.258 "namespaces": [ 00:15:37.258 { 00:15:37.258 "nsid": 1, 00:15:37.258 "bdev_name": "Malloc2", 00:15:37.258 "name": "Malloc2", 00:15:37.258 "nguid": "83EFC3A4091A40568DFDD840F05991B1", 00:15:37.258 "uuid": "83efc3a4-091a-4056-8dfd-d840f05991b1" 00:15:37.258 } 00:15:37.258 ] 00:15:37.258 } 00:15:37.258 ] 00:15:37.258 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:37.258 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1158670 00:15:37.258 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:37.258 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:37.258 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 
00:15:37.258 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:37.258 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:37.258 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:37.258 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:37.258 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:37.515 [2024-12-10 05:40:25.272597] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:37.515 Malloc3 00:15:37.515 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:37.772 [2024-12-10 05:40:25.507358] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:37.772 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:37.772 Asynchronous Event Request test 00:15:37.772 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.772 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.772 Registering asynchronous event callbacks... 00:15:37.772 Starting namespace attribute notice tests for all controllers... 00:15:37.772 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:37.772 aer_cb - Changed Namespace 00:15:37.772 Cleaning up... 
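The AER test above triggers a Namespace Attribute Changed notice by hot-adding a namespace over RPC while the `aer` tool is listening. A sketch of that RPC sequence, taken from the commands in the log (a running SPDK target is required, so the RPC calls themselves are left commented):

```shell
# RPC sequence that fires the namespace-change AEN on cnode1.
RPC=./scripts/rpc.py          # path relative to the spdk checkout
NQN=nqn.2019-07.io.spdk:cnode1

# 1) Create a 64 MiB malloc bdev with 512-byte blocks.
# ${RPC} bdev_malloc_create 64 512 --name Malloc3
# 2) Attach it as namespace 2 of the subsystem -- this is the attribute
#    change the aer tool's callback reports ("aer_cb - Changed Namespace").
# ${RPC} nvmf_subsystem_add_ns ${NQN} Malloc3 -n 2
echo "${NQN}"
```

The follow-up `nvmf_get_subsystems` call below confirms the result: `Malloc3` now appears as `nsid: 2` under `nqn.2019-07.io.spdk:cnode1`.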
00:15:38.031 [ 00:15:38.031 { 00:15:38.031 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:38.031 "subtype": "Discovery", 00:15:38.031 "listen_addresses": [], 00:15:38.031 "allow_any_host": true, 00:15:38.031 "hosts": [] 00:15:38.031 }, 00:15:38.031 { 00:15:38.031 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:38.031 "subtype": "NVMe", 00:15:38.031 "listen_addresses": [ 00:15:38.031 { 00:15:38.031 "trtype": "VFIOUSER", 00:15:38.031 "adrfam": "IPv4", 00:15:38.031 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:38.031 "trsvcid": "0" 00:15:38.031 } 00:15:38.031 ], 00:15:38.031 "allow_any_host": true, 00:15:38.031 "hosts": [], 00:15:38.031 "serial_number": "SPDK1", 00:15:38.031 "model_number": "SPDK bdev Controller", 00:15:38.031 "max_namespaces": 32, 00:15:38.031 "min_cntlid": 1, 00:15:38.031 "max_cntlid": 65519, 00:15:38.031 "namespaces": [ 00:15:38.031 { 00:15:38.031 "nsid": 1, 00:15:38.031 "bdev_name": "Malloc1", 00:15:38.031 "name": "Malloc1", 00:15:38.031 "nguid": "2A2A4EC2D207440E8EC12F8D73E43864", 00:15:38.031 "uuid": "2a2a4ec2-d207-440e-8ec1-2f8d73e43864" 00:15:38.031 }, 00:15:38.031 { 00:15:38.031 "nsid": 2, 00:15:38.031 "bdev_name": "Malloc3", 00:15:38.031 "name": "Malloc3", 00:15:38.031 "nguid": "A2A51AEC520B48B382A804F60C3000D9", 00:15:38.031 "uuid": "a2a51aec-520b-48b3-82a8-04f60c3000d9" 00:15:38.031 } 00:15:38.031 ] 00:15:38.031 }, 00:15:38.031 { 00:15:38.031 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:38.031 "subtype": "NVMe", 00:15:38.031 "listen_addresses": [ 00:15:38.031 { 00:15:38.031 "trtype": "VFIOUSER", 00:15:38.031 "adrfam": "IPv4", 00:15:38.031 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:38.031 "trsvcid": "0" 00:15:38.031 } 00:15:38.031 ], 00:15:38.031 "allow_any_host": true, 00:15:38.031 "hosts": [], 00:15:38.031 "serial_number": "SPDK2", 00:15:38.031 "model_number": "SPDK bdev Controller", 00:15:38.031 "max_namespaces": 32, 00:15:38.031 "min_cntlid": 1, 00:15:38.031 "max_cntlid": 65519, 00:15:38.031 "namespaces": [ 
00:15:38.031 { 00:15:38.031 "nsid": 1, 00:15:38.031 "bdev_name": "Malloc2", 00:15:38.031 "name": "Malloc2", 00:15:38.031 "nguid": "83EFC3A4091A40568DFDD840F05991B1", 00:15:38.031 "uuid": "83efc3a4-091a-4056-8dfd-d840f05991b1" 00:15:38.031 } 00:15:38.031 ] 00:15:38.031 } 00:15:38.031 ] 00:15:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1158670 00:15:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:38.032 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:38.032 [2024-12-10 05:40:25.756730] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
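The identify pass that follows enables per-component debug logging with repeated `-L` flags, which is what produces the register and BAR traces below. A sketch of the invocation shape, using the flags from the logged command (binary path shortened; a live vfio-user target is required, so the call is left commented):

```shell
# Identify the second vfio-user controller with transport debug logs enabled.
BIN=./build/bin/spdk_nvme_identify
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

# Each -L flag turns on one debug log component; nvme_vfio and vfio_pci
# emit the BAR-mapping and register read/write lines seen in the trace.
# "${BIN}" -r "${TRID}" -g -L nvme -L nvme_vfio -L vfio_pci
echo "${TRID}"
```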
00:15:38.032 [2024-12-10 05:40:25.756758] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158749 ] 00:15:38.032 [2024-12-10 05:40:25.801864] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:38.032 [2024-12-10 05:40:25.810425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:38.032 [2024-12-10 05:40:25.810453] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f670b5ab000 00:15:38.032 [2024-12-10 05:40:25.811433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.032 [2024-12-10 05:40:25.812440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.032 [2024-12-10 05:40:25.813449] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.032 [2024-12-10 05:40:25.814454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.032 [2024-12-10 05:40:25.815460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.032 [2024-12-10 05:40:25.816464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.032 [2024-12-10 05:40:25.817475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:38.032 
[2024-12-10 05:40:25.818479] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:38.032 [2024-12-10 05:40:25.819493] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:38.032 [2024-12-10 05:40:25.819504] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f670b5a0000 00:15:38.032 [2024-12-10 05:40:25.820423] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:38.032 [2024-12-10 05:40:25.833438] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:38.032 [2024-12-10 05:40:25.833467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:38.032 [2024-12-10 05:40:25.835532] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:38.032 [2024-12-10 05:40:25.835569] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:38.032 [2024-12-10 05:40:25.835639] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:38.032 [2024-12-10 05:40:25.835654] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:38.032 [2024-12-10 05:40:25.835659] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:38.032 [2024-12-10 05:40:25.836533] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:38.032 [2024-12-10 05:40:25.836544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:38.032 [2024-12-10 05:40:25.836550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:38.032 [2024-12-10 05:40:25.841174] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:38.032 [2024-12-10 05:40:25.841184] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:38.032 [2024-12-10 05:40:25.841194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:38.032 [2024-12-10 05:40:25.841578] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:38.032 [2024-12-10 05:40:25.841587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:38.032 [2024-12-10 05:40:25.842589] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:38.032 [2024-12-10 05:40:25.842598] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:38.032 [2024-12-10 05:40:25.842602] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:38.032 [2024-12-10 05:40:25.842608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:38.032 [2024-12-10 05:40:25.842716] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:38.032 [2024-12-10 05:40:25.842720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:38.032 [2024-12-10 05:40:25.842724] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:38.032 [2024-12-10 05:40:25.843595] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:38.032 [2024-12-10 05:40:25.844601] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:38.032 [2024-12-10 05:40:25.845605] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:38.032 [2024-12-10 05:40:25.846608] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.032 [2024-12-10 05:40:25.846648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:38.032 [2024-12-10 05:40:25.847620] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:38.032 [2024-12-10 05:40:25.847629] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:38.032 [2024-12-10 05:40:25.847634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:38.032 [2024-12-10 05:40:25.847651] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:38.032 [2024-12-10 05:40:25.847658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:38.032 [2024-12-10 05:40:25.847673] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.032 [2024-12-10 05:40:25.847677] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.032 [2024-12-10 05:40:25.847680] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.032 [2024-12-10 05:40:25.847691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.032 [2024-12-10 05:40:25.852177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:38.032 [2024-12-10 05:40:25.852190] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:38.032 [2024-12-10 05:40:25.852194] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:38.032 [2024-12-10 05:40:25.852198] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:38.032 [2024-12-10 05:40:25.852202] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:38.032 [2024-12-10 05:40:25.852206] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:38.032 [2024-12-10 05:40:25.852210] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:38.032 [2024-12-10 05:40:25.852215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:38.032 [2024-12-10 05:40:25.852222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:38.032 [2024-12-10 05:40:25.852231] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:38.032 [2024-12-10 05:40:25.859175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:38.032 [2024-12-10 05:40:25.859190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.032 [2024-12-10 05:40:25.859198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.032 [2024-12-10 05:40:25.859206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.032 [2024-12-10 05:40:25.859213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.032 [2024-12-10 05:40:25.859218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:38.032 [2024-12-10 05:40:25.859228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:38.032 [2024-12-10 05:40:25.859237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:38.032 [2024-12-10 05:40:25.867173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:38.032 [2024-12-10 05:40:25.867181] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:38.032 [2024-12-10 05:40:25.867186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:38.032 [2024-12-10 05:40:25.867197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:38.032 [2024-12-10 05:40:25.867203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.867211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:38.033 [2024-12-10 05:40:25.875172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:38.033 [2024-12-10 05:40:25.875228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.875236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:38.033 
[2024-12-10 05:40:25.875243] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:38.033 [2024-12-10 05:40:25.875247] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:38.033 [2024-12-10 05:40:25.875250] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.033 [2024-12-10 05:40:25.875256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:38.033 [2024-12-10 05:40:25.883173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:38.033 [2024-12-10 05:40:25.883187] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:38.033 [2024-12-10 05:40:25.883194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.883201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.883208] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.033 [2024-12-10 05:40:25.883212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.033 [2024-12-10 05:40:25.883215] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.033 [2024-12-10 05:40:25.883221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.033 [2024-12-10 05:40:25.891173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:38.033 [2024-12-10 05:40:25.891185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.891192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.891199] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:38.033 [2024-12-10 05:40:25.891203] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.033 [2024-12-10 05:40:25.891206] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.033 [2024-12-10 05:40:25.891211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.033 [2024-12-10 05:40:25.899173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:38.033 [2024-12-10 05:40:25.899185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.899192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.899198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.899204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.899211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.899215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.899220] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:38.033 [2024-12-10 05:40:25.899224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:38.033 [2024-12-10 05:40:25.899229] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:38.033 [2024-12-10 05:40:25.899245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:38.033 [2024-12-10 05:40:25.907173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:38.033 [2024-12-10 05:40:25.907185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:38.033 [2024-12-10 05:40:25.915173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:38.033 [2024-12-10 05:40:25.915186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:38.292 [2024-12-10 05:40:25.923178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:38.292 [2024-12-10 
05:40:25.923203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:38.292 [2024-12-10 05:40:25.931174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:38.292 [2024-12-10 05:40:25.931199] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:38.292 [2024-12-10 05:40:25.931204] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:38.292 [2024-12-10 05:40:25.931208] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:38.292 [2024-12-10 05:40:25.931211] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:38.292 [2024-12-10 05:40:25.931214] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:38.292 [2024-12-10 05:40:25.931220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:38.292 [2024-12-10 05:40:25.931227] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:38.292 [2024-12-10 05:40:25.931231] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:38.292 [2024-12-10 05:40:25.931234] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.292 [2024-12-10 05:40:25.931240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:38.292 [2024-12-10 05:40:25.931246] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:38.292 [2024-12-10 05:40:25.931250] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:38.292 [2024-12-10 05:40:25.931253] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.292 [2024-12-10 05:40:25.931258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:38.292 [2024-12-10 05:40:25.931265] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:38.292 [2024-12-10 05:40:25.931271] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:38.292 [2024-12-10 05:40:25.931274] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:38.292 [2024-12-10 05:40:25.931279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:38.292 [2024-12-10 05:40:25.939174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:38.292 [2024-12-10 05:40:25.939189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:38.292 [2024-12-10 05:40:25.939199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:38.292 [2024-12-10 05:40:25.939205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:38.292 ===================================================== 00:15:38.292 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.292 ===================================================== 00:15:38.292 Controller Capabilities/Features 00:15:38.292 
================================ 00:15:38.292 Vendor ID: 4e58 00:15:38.292 Subsystem Vendor ID: 4e58 00:15:38.292 Serial Number: SPDK2 00:15:38.292 Model Number: SPDK bdev Controller 00:15:38.292 Firmware Version: 25.01 00:15:38.292 Recommended Arb Burst: 6 00:15:38.292 IEEE OUI Identifier: 8d 6b 50 00:15:38.292 Multi-path I/O 00:15:38.292 May have multiple subsystem ports: Yes 00:15:38.292 May have multiple controllers: Yes 00:15:38.292 Associated with SR-IOV VF: No 00:15:38.292 Max Data Transfer Size: 131072 00:15:38.292 Max Number of Namespaces: 32 00:15:38.292 Max Number of I/O Queues: 127 00:15:38.292 NVMe Specification Version (VS): 1.3 00:15:38.292 NVMe Specification Version (Identify): 1.3 00:15:38.292 Maximum Queue Entries: 256 00:15:38.292 Contiguous Queues Required: Yes 00:15:38.292 Arbitration Mechanisms Supported 00:15:38.292 Weighted Round Robin: Not Supported 00:15:38.292 Vendor Specific: Not Supported 00:15:38.292 Reset Timeout: 15000 ms 00:15:38.292 Doorbell Stride: 4 bytes 00:15:38.292 NVM Subsystem Reset: Not Supported 00:15:38.292 Command Sets Supported 00:15:38.292 NVM Command Set: Supported 00:15:38.292 Boot Partition: Not Supported 00:15:38.292 Memory Page Size Minimum: 4096 bytes 00:15:38.292 Memory Page Size Maximum: 4096 bytes 00:15:38.292 Persistent Memory Region: Not Supported 00:15:38.292 Optional Asynchronous Events Supported 00:15:38.292 Namespace Attribute Notices: Supported 00:15:38.292 Firmware Activation Notices: Not Supported 00:15:38.292 ANA Change Notices: Not Supported 00:15:38.292 PLE Aggregate Log Change Notices: Not Supported 00:15:38.292 LBA Status Info Alert Notices: Not Supported 00:15:38.292 EGE Aggregate Log Change Notices: Not Supported 00:15:38.292 Normal NVM Subsystem Shutdown event: Not Supported 00:15:38.292 Zone Descriptor Change Notices: Not Supported 00:15:38.292 Discovery Log Change Notices: Not Supported 00:15:38.292 Controller Attributes 00:15:38.292 128-bit Host Identifier: Supported 00:15:38.292 
Non-Operational Permissive Mode: Not Supported 00:15:38.292 NVM Sets: Not Supported 00:15:38.292 Read Recovery Levels: Not Supported 00:15:38.292 Endurance Groups: Not Supported 00:15:38.292 Predictable Latency Mode: Not Supported 00:15:38.292 Traffic Based Keep Alive: Not Supported 00:15:38.292 Namespace Granularity: Not Supported 00:15:38.292 SQ Associations: Not Supported 00:15:38.292 UUID List: Not Supported 00:15:38.292 Multi-Domain Subsystem: Not Supported 00:15:38.292 Fixed Capacity Management: Not Supported 00:15:38.293 Variable Capacity Management: Not Supported 00:15:38.293 Delete Endurance Group: Not Supported 00:15:38.293 Delete NVM Set: Not Supported 00:15:38.293 Extended LBA Formats Supported: Not Supported 00:15:38.293 Flexible Data Placement Supported: Not Supported 00:15:38.293 00:15:38.293 Controller Memory Buffer Support 00:15:38.293 ================================ 00:15:38.293 Supported: No 00:15:38.293 00:15:38.293 Persistent Memory Region Support 00:15:38.293 ================================ 00:15:38.293 Supported: No 00:15:38.293 00:15:38.293 Admin Command Set Attributes 00:15:38.293 ============================ 00:15:38.293 Security Send/Receive: Not Supported 00:15:38.293 Format NVM: Not Supported 00:15:38.293 Firmware Activate/Download: Not Supported 00:15:38.293 Namespace Management: Not Supported 00:15:38.293 Device Self-Test: Not Supported 00:15:38.293 Directives: Not Supported 00:15:38.293 NVMe-MI: Not Supported 00:15:38.293 Virtualization Management: Not Supported 00:15:38.293 Doorbell Buffer Config: Not Supported 00:15:38.293 Get LBA Status Capability: Not Supported 00:15:38.293 Command & Feature Lockdown Capability: Not Supported 00:15:38.293 Abort Command Limit: 4 00:15:38.293 Async Event Request Limit: 4 00:15:38.293 Number of Firmware Slots: N/A 00:15:38.293 Firmware Slot 1 Read-Only: N/A 00:15:38.293 Firmware Activation Without Reset: N/A 00:15:38.293 Multiple Update Detection Support: N/A 00:15:38.293 Firmware Update 
Granularity: No Information Provided 00:15:38.293 Per-Namespace SMART Log: No 00:15:38.293 Asymmetric Namespace Access Log Page: Not Supported 00:15:38.293 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:38.293 Command Effects Log Page: Supported 00:15:38.293 Get Log Page Extended Data: Supported 00:15:38.293 Telemetry Log Pages: Not Supported 00:15:38.293 Persistent Event Log Pages: Not Supported 00:15:38.293 Supported Log Pages Log Page: May Support 00:15:38.293 Commands Supported & Effects Log Page: Not Supported 00:15:38.293 Feature Identifiers & Effects Log Page: May Support 00:15:38.293 NVMe-MI Commands & Effects Log Page: May Support 00:15:38.293 Data Area 4 for Telemetry Log: Not Supported 00:15:38.293 Error Log Page Entries Supported: 128 00:15:38.293 Keep Alive: Supported 00:15:38.293 Keep Alive Granularity: 10000 ms 00:15:38.293 00:15:38.293 NVM Command Set Attributes 00:15:38.293 ========================== 00:15:38.293 Submission Queue Entry Size 00:15:38.293 Max: 64 00:15:38.293 Min: 64 00:15:38.293 Completion Queue Entry Size 00:15:38.293 Max: 16 00:15:38.293 Min: 16 00:15:38.293 Number of Namespaces: 32 00:15:38.293 Compare Command: Supported 00:15:38.293 Write Uncorrectable Command: Not Supported 00:15:38.293 Dataset Management Command: Supported 00:15:38.293 Write Zeroes Command: Supported 00:15:38.293 Set Features Save Field: Not Supported 00:15:38.293 Reservations: Not Supported 00:15:38.293 Timestamp: Not Supported 00:15:38.293 Copy: Supported 00:15:38.293 Volatile Write Cache: Present 00:15:38.293 Atomic Write Unit (Normal): 1 00:15:38.293 Atomic Write Unit (PFail): 1 00:15:38.293 Atomic Compare & Write Unit: 1 00:15:38.293 Fused Compare & Write: Supported 00:15:38.293 Scatter-Gather List 00:15:38.293 SGL Command Set: Supported (Dword aligned) 00:15:38.293 SGL Keyed: Not Supported 00:15:38.293 SGL Bit Bucket Descriptor: Not Supported 00:15:38.293 SGL Metadata Pointer: Not Supported 00:15:38.293 Oversized SGL: Not Supported 00:15:38.293 SGL 
Metadata Address: Not Supported 00:15:38.293 SGL Offset: Not Supported 00:15:38.293 Transport SGL Data Block: Not Supported 00:15:38.293 Replay Protected Memory Block: Not Supported 00:15:38.293 00:15:38.293 Firmware Slot Information 00:15:38.293 ========================= 00:15:38.293 Active slot: 1 00:15:38.293 Slot 1 Firmware Revision: 25.01 00:15:38.293 00:15:38.293 00:15:38.293 Commands Supported and Effects 00:15:38.293 ============================== 00:15:38.293 Admin Commands 00:15:38.293 -------------- 00:15:38.293 Get Log Page (02h): Supported 00:15:38.293 Identify (06h): Supported 00:15:38.293 Abort (08h): Supported 00:15:38.293 Set Features (09h): Supported 00:15:38.293 Get Features (0Ah): Supported 00:15:38.293 Asynchronous Event Request (0Ch): Supported 00:15:38.293 Keep Alive (18h): Supported 00:15:38.293 I/O Commands 00:15:38.293 ------------ 00:15:38.293 Flush (00h): Supported LBA-Change 00:15:38.293 Write (01h): Supported LBA-Change 00:15:38.293 Read (02h): Supported 00:15:38.293 Compare (05h): Supported 00:15:38.293 Write Zeroes (08h): Supported LBA-Change 00:15:38.293 Dataset Management (09h): Supported LBA-Change 00:15:38.293 Copy (19h): Supported LBA-Change 00:15:38.293 00:15:38.293 Error Log 00:15:38.293 ========= 00:15:38.293 00:15:38.293 Arbitration 00:15:38.293 =========== 00:15:38.293 Arbitration Burst: 1 00:15:38.293 00:15:38.293 Power Management 00:15:38.293 ================ 00:15:38.293 Number of Power States: 1 00:15:38.293 Current Power State: Power State #0 00:15:38.293 Power State #0: 00:15:38.293 Max Power: 0.00 W 00:15:38.293 Non-Operational State: Operational 00:15:38.293 Entry Latency: Not Reported 00:15:38.293 Exit Latency: Not Reported 00:15:38.293 Relative Read Throughput: 0 00:15:38.293 Relative Read Latency: 0 00:15:38.293 Relative Write Throughput: 0 00:15:38.293 Relative Write Latency: 0 00:15:38.293 Idle Power: Not Reported 00:15:38.293 Active Power: Not Reported 00:15:38.293 Non-Operational Permissive Mode: Not 
Supported 00:15:38.293 00:15:38.293 Health Information 00:15:38.293 ================== 00:15:38.293 Critical Warnings: 00:15:38.293 Available Spare Space: OK 00:15:38.293 Temperature: OK 00:15:38.293 Device Reliability: OK 00:15:38.293 Read Only: No 00:15:38.293 Volatile Memory Backup: OK 00:15:38.293 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:38.293 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:38.293 Available Spare: 0% 00:15:38.293 Available Spare Threshold: 0% 00:15:38.293 Life Percentage Used: 0% 00:15:38.293 Data Units Read: 0 00:15:38.293 Data Units Written: 0 00:15:38.293 Host Read Commands: 0 00:15:38.293 Host Write Commands: 0 00:15:38.293 Controller Busy Time: 0 minutes 00:15:38.293 Power Cycles: 0 00:15:38.293 Power On Hours: 0 hours 00:15:38.293 Unsafe Shutdowns: 0 00:15:38.293 Unrecoverable Media Errors: 0 00:15:38.293 Lifetime Error Log Entries: 0 00:15:38.293 Warning Temperature Time: 0 minutes 00:15:38.293 Critical Temperature Time: 0 minutes 00:15:38.293 00:15:38.293 Number of Queues 00:15:38.293 ================ 00:15:38.293 Number of I/O Submission Queues: 127 00:15:38.293 Number of I/O Completion Queues: 127 00:15:38.293 00:15:38.293 Active Namespaces 00:15:38.293 ================= 00:15:38.293 Namespace ID:1 00:15:38.294 Error Recovery Timeout: Unlimited 00:15:38.294 Command Set Identifier: NVM (00h) 00:15:38.294 Deallocate: Supported 00:15:38.294 Deallocated/Unwritten Error: Not Supported 00:15:38.294 Deallocated Read Value: Unknown 00:15:38.294 Deallocate in Write Zeroes: Not Supported 00:15:38.294 Deallocated Guard Field: 0xFFFF 00:15:38.294 Flush: Supported 00:15:38.294 Reservation: Supported 00:15:38.294 Namespace Sharing Capabilities: Multiple Controllers 00:15:38.294 Size (in LBAs): 131072 (0GiB) 00:15:38.294 Capacity (in LBAs): 131072 (0GiB) 00:15:38.294 Utilization (in LBAs): 131072 (0GiB) 00:15:38.294 NGUID: 83EFC3A4091A40568DFDD840F05991B1 00:15:38.294 UUID: 83efc3a4-091a-4056-8dfd-d840f05991b1 00:15:38.294 Thin Provisioning: Not Supported 00:15:38.294 Per-NS Atomic Units: Yes 00:15:38.294 Atomic Boundary Size (Normal): 0 00:15:38.294 Atomic Boundary Size (PFail): 0 00:15:38.294 Atomic Boundary Offset: 0 00:15:38.294 Maximum Single Source Range Length: 65535 00:15:38.294 Maximum Copy Length: 65535 00:15:38.294 Maximum Source Range Count: 1 00:15:38.294 NGUID/EUI64 Never Reused: No 00:15:38.294 Namespace Write Protected: No 00:15:38.294 Number of LBA Formats: 1 00:15:38.294 Current LBA Format: LBA Format #00 00:15:38.294 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:38.294 00:15:38.294 
[2024-12-10 05:40:25.939290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:38.293 [2024-12-10 05:40:25.947175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:38.293 [2024-12-10 05:40:25.947208] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:38.293 [2024-12-10 05:40:25.947216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.293 [2024-12-10 05:40:25.947222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.293 [2024-12-10 05:40:25.947228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.293 [2024-12-10 05:40:25.947233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.293 [2024-12-10 05:40:25.947279] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:38.293 [2024-12-10 05:40:25.947289] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:38.293 [2024-12-10 05:40:25.948278] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.293 [2024-12-10 05:40:25.948323] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:38.293 [2024-12-10 05:40:25.948330] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:38.293 [2024-12-10 05:40:25.949280] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:38.293 [2024-12-10 05:40:25.949291] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:38.293 [2024-12-10 05:40:25.949340] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:38.293 [2024-12-10 05:40:25.952175] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:38.293 
05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:38.294 [2024-12-10 05:40:26.176254] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.555 Initializing NVMe Controllers 00:15:43.555 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:43.555 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:43.555 Initialization complete. Launching workers. 00:15:43.555 ======================================================== 00:15:43.555 Latency(us) 00:15:43.555 Device Information : IOPS MiB/s Average min max 00:15:43.555 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39896.60 155.85 3208.41 961.45 10328.93 00:15:43.555 ======================================================== 00:15:43.555 Total : 39896.60 155.85 3208.41 961.45 10328.93 00:15:43.555 00:15:43.555 [2024-12-10 05:40:31.277426] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.555 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:43.813 [2024-12-10 05:40:31.516150] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.147 Initializing NVMe Controllers 00:15:49.147 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:49.147 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:49.147 Initialization complete. Launching workers. 
00:15:49.147 ======================================================== 00:15:49.147 Latency(us) 00:15:49.147 Device Information : IOPS MiB/s Average min max 00:15:49.147 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39967.18 156.12 3203.51 967.84 7998.20 00:15:49.147 ======================================================== 00:15:49.147 Total : 39967.18 156.12 3203.51 967.84 7998.20 00:15:49.147 00:15:49.147 [2024-12-10 05:40:36.537098] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.147 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:49.147 [2024-12-10 05:40:36.743389] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:54.411 [2024-12-10 05:40:41.881269] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:54.411 Initializing NVMe Controllers 00:15:54.411 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:54.411 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:54.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:54.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:54.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:54.411 Initialization complete. Launching workers. 
00:15:54.411 Starting thread on core 2 00:15:54.411 Starting thread on core 3 00:15:54.411 Starting thread on core 1 00:15:54.411 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:54.411 [2024-12-10 05:40:42.181573] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.694 [2024-12-10 05:40:45.261282] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:57.694 Initializing NVMe Controllers 00:15:57.694 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.694 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.694 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:57.694 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:57.694 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:57.694 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:57.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:57.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:57.694 Initialization complete. Launching workers. 
00:15:57.694 Starting thread on core 1 with urgent priority queue 00:15:57.694 Starting thread on core 2 with urgent priority queue 00:15:57.694 Starting thread on core 3 with urgent priority queue 00:15:57.694 Starting thread on core 0 with urgent priority queue 00:15:57.694 SPDK bdev Controller (SPDK2 ) core 0: 8473.67 IO/s 11.80 secs/100000 ios 00:15:57.694 SPDK bdev Controller (SPDK2 ) core 1: 9845.67 IO/s 10.16 secs/100000 ios 00:15:57.694 SPDK bdev Controller (SPDK2 ) core 2: 8394.67 IO/s 11.91 secs/100000 ios 00:15:57.694 SPDK bdev Controller (SPDK2 ) core 3: 8344.33 IO/s 11.98 secs/100000 ios 00:15:57.694 ======================================================== 00:15:57.694 00:15:57.694 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:57.694 [2024-12-10 05:40:45.543655] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.694 Initializing NVMe Controllers 00:15:57.694 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.694 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.694 Namespace ID: 1 size: 0GB 00:15:57.694 Initialization complete. 00:15:57.694 INFO: using host memory buffer for IO 00:15:57.694 Hello world! 
00:15:57.694 [2024-12-10 05:40:45.553726] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:57.952 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:57.952 [2024-12-10 05:40:45.833847] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:59.326 Initializing NVMe Controllers 00:15:59.326 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.326 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.326 Initialization complete. Launching workers. 00:15:59.326 submit (in ns) avg, min, max = 7088.1, 3144.8, 4000615.2 00:15:59.326 complete (in ns) avg, min, max = 20891.2, 1718.1, 4002074.3 00:15:59.326 00:15:59.326 Submit histogram 00:15:59.326 ================ 00:15:59.326 Range in us Cumulative Count 00:15:59.326 3.139 - 3.154: 0.0248% ( 4) 00:15:59.326 3.154 - 3.170: 0.0434% ( 3) 00:15:59.326 3.170 - 3.185: 0.0806% ( 6) 00:15:59.326 3.185 - 3.200: 0.5334% ( 73) 00:15:59.326 3.200 - 3.215: 2.5800% ( 330) 00:15:59.326 3.215 - 3.230: 7.6780% ( 822) 00:15:59.326 3.230 - 3.246: 13.0489% ( 866) 00:15:59.326 3.246 - 3.261: 19.4059% ( 1025) 00:15:59.326 3.261 - 3.276: 27.3443% ( 1280) 00:15:59.326 3.276 - 3.291: 34.1665% ( 1100) 00:15:59.326 3.291 - 3.307: 39.5559% ( 869) 00:15:59.326 3.307 - 3.322: 44.4059% ( 782) 00:15:59.326 3.322 - 3.337: 49.1503% ( 765) 00:15:59.326 3.337 - 3.352: 53.2126% ( 655) 00:15:59.326 3.352 - 3.368: 58.1431% ( 795) 00:15:59.326 3.368 - 3.383: 64.9219% ( 1093) 00:15:59.326 3.383 - 3.398: 69.8090% ( 788) 00:15:59.326 3.398 - 3.413: 75.3287% ( 890) 00:15:59.326 3.413 - 3.429: 80.5817% ( 847) 00:15:59.326 3.429 - 3.444: 84.0858% ( 565) 00:15:59.326 3.444 - 3.459: 86.2813% ( 354) 
00:15:59.326 3.459 - 3.474: 87.2612% ( 158) 00:15:59.326 3.474 - 3.490: 87.7698% ( 82) 00:15:59.326 3.490 - 3.505: 88.1729% ( 65) 00:15:59.326 3.505 - 3.520: 88.8799% ( 114) 00:15:59.326 3.520 - 3.535: 89.8536% ( 157) 00:15:59.326 3.535 - 3.550: 90.8025% ( 153) 00:15:59.326 3.550 - 3.566: 91.9127% ( 179) 00:15:59.326 3.566 - 3.581: 92.8926% ( 158) 00:15:59.326 3.581 - 3.596: 93.6988% ( 130) 00:15:59.326 3.596 - 3.611: 94.4307% ( 118) 00:15:59.326 3.611 - 3.627: 95.2121% ( 126) 00:15:59.326 3.627 - 3.642: 96.0556% ( 136) 00:15:59.326 3.642 - 3.657: 96.8122% ( 122) 00:15:59.326 3.657 - 3.672: 97.5130% ( 113) 00:15:59.326 3.672 - 3.688: 98.0154% ( 81) 00:15:59.326 3.688 - 3.703: 98.3503% ( 54) 00:15:59.326 3.703 - 3.718: 98.6294% ( 45) 00:15:59.326 3.718 - 3.733: 98.9333% ( 49) 00:15:59.326 3.733 - 3.749: 99.1875% ( 41) 00:15:59.326 3.749 - 3.764: 99.2992% ( 18) 00:15:59.326 3.764 - 3.779: 99.4480% ( 24) 00:15:59.326 3.779 - 3.794: 99.4914% ( 7) 00:15:59.326 3.794 - 3.810: 99.5162% ( 4) 00:15:59.326 3.825 - 3.840: 99.5411% ( 4) 00:15:59.326 3.855 - 3.870: 99.5473% ( 1) 00:15:59.326 3.870 - 3.886: 99.5659% ( 3) 00:15:59.326 3.901 - 3.931: 99.5721% ( 1) 00:15:59.326 3.931 - 3.962: 99.5783% ( 1) 00:15:59.326 3.962 - 3.992: 99.5845% ( 1) 00:15:59.326 3.992 - 4.023: 99.5969% ( 2) 00:15:59.326 4.084 - 4.114: 99.6155% ( 3) 00:15:59.326 4.114 - 4.145: 99.6217% ( 1) 00:15:59.326 4.175 - 4.206: 99.6279% ( 1) 00:15:59.326 4.907 - 4.937: 99.6341% ( 1) 00:15:59.326 5.211 - 5.242: 99.6403% ( 1) 00:15:59.326 5.303 - 5.333: 99.6465% ( 1) 00:15:59.326 5.425 - 5.455: 99.6527% ( 1) 00:15:59.326 5.516 - 5.547: 99.6589% ( 1) 00:15:59.326 5.547 - 5.577: 99.6651% ( 1) 00:15:59.326 5.638 - 5.669: 99.6713% ( 1) 00:15:59.326 6.065 - 6.095: 99.6775% ( 1) 00:15:59.326 6.156 - 6.187: 99.6837% ( 1) 00:15:59.326 6.248 - 6.278: 99.6899% ( 1) 00:15:59.326 6.522 - 6.552: 99.6961% ( 1) 00:15:59.326 6.552 - 6.583: 99.7023% ( 1) 00:15:59.326 6.674 - 6.705: 99.7085% ( 1) 00:15:59.326 6.766 - 6.796: 
99.7147% ( 1) 00:15:59.326 6.796 - 6.827: 99.7209% ( 1) 00:15:59.326 6.827 - 6.857: 99.7333% ( 2) 00:15:59.326 6.918 - 6.949: 99.7457% ( 2) 00:15:59.326 6.949 - 6.979: 99.7581% ( 2) 00:15:59.326 7.040 - 7.070: 99.7643% ( 1) 00:15:59.326 7.192 - 7.223: 99.7705% ( 1) 00:15:59.326 7.253 - 7.284: 99.7767% ( 1) 00:15:59.326 7.284 - 7.314: 99.7829% ( 1) 00:15:59.326 7.375 - 7.406: 99.7891% ( 1) 00:15:59.326 7.467 - 7.497: 99.8015% ( 2) 00:15:59.326 7.497 - 7.528: 99.8077% ( 1) 00:15:59.326 [2024-12-10 05:40:46.942177] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:59.326 7.619 - 7.650: 99.8139% ( 1) 00:15:59.326 7.650 - 7.680: 99.8201% ( 1) 00:15:59.326 7.680 - 7.710: 99.8263% ( 1) 00:15:59.326 7.741 - 7.771: 99.8325% ( 1) 00:15:59.326 7.771 - 7.802: 99.8387% ( 1) 00:15:59.326 7.863 - 7.924: 99.8450% ( 1) 00:15:59.326 8.168 - 8.229: 99.8512% ( 1) 00:15:59.326 8.229 - 8.290: 99.8574% ( 1) 00:15:59.326 8.350 - 8.411: 99.8636% ( 1) 00:15:59.326 8.411 - 8.472: 99.8698% ( 1) 00:15:59.326 8.533 - 8.594: 99.8760% ( 1) 00:15:59.326 8.716 - 8.777: 99.8822% ( 1) 00:15:59.326 9.326 - 9.387: 99.8884% ( 1) 00:15:59.326 19.017 - 19.139: 99.8946% ( 1) 00:15:59.326 19.383 - 19.505: 99.9008% ( 1) 00:15:59.326 20.358 - 20.480: 99.9070% ( 1) 00:15:59.326 3994.575 - 4025.783: 100.0000% ( 15) 00:15:59.326 00:15:59.326 Complete histogram 00:15:59.326 ================== 00:15:59.326 Range in us Cumulative Count 00:15:59.326 1.714 - 1.722: 0.0186% ( 3) 00:15:59.326 1.722 - 1.730: 0.0806% ( 10) 00:15:59.326 1.730 - 1.737: 0.1240% ( 7) 00:15:59.326 1.737 - 1.745: 0.1302% ( 1) 00:15:59.326 1.745 - 1.752: 0.1364% ( 1) 00:15:59.326 1.752 - 1.760: 0.1737% ( 6) 00:15:59.326 1.760 - 1.768: 1.0357% ( 139) 00:15:59.326 1.768 - 1.775: 8.8129% ( 1254) 00:15:59.326 1.775 - 1.783: 25.9241% ( 2759) 00:15:59.326 1.783 - 1.790: 37.2426% ( 1825) 00:15:59.326 1.790 - 1.798: 40.3746% ( 505) 00:15:59.326 1.798 - 1.806: 42.6383% ( 365) 00:15:59.326 1.806 
- 1.813: 48.5239% ( 949) 00:15:59.326 1.813 - 1.821: 65.2878% ( 2703) 00:15:59.326 1.821 - 1.829: 83.1555% ( 2881) 00:15:59.326 1.829 - 1.836: 91.4537% ( 1338) 00:15:59.326 1.836 - 1.844: 94.3190% ( 462) 00:15:59.326 1.844 - 1.851: 95.8323% ( 244) 00:15:59.326 1.851 - 1.859: 96.7502% ( 148) 00:15:59.326 1.859 - 1.867: 97.3456% ( 96) 00:15:59.326 1.867 - 1.874: 97.6247% ( 45) 00:15:59.326 1.874 - 1.882: 97.8913% ( 43) 00:15:59.326 1.882 - 1.890: 98.2138% ( 52) 00:15:59.326 1.890 - 1.897: 98.5798% ( 59) 00:15:59.326 1.897 - 1.905: 98.8340% ( 41) 00:15:59.326 1.905 - 1.912: 99.0573% ( 36) 00:15:59.326 1.912 - 1.920: 99.1069% ( 8) 00:15:59.326 1.920 - 1.928: 99.1627% ( 9) 00:15:59.326 1.928 - 1.935: 99.2062% ( 7) 00:15:59.326 1.935 - 1.943: 99.2186% ( 2) 00:15:59.326 1.943 - 1.950: 99.2496% ( 5) 00:15:59.326 1.950 - 1.966: 99.2682% ( 3) 00:15:59.326 1.966 - 1.981: 99.2744% ( 1) 00:15:59.326 1.996 - 2.011: 99.2806% ( 1) 00:15:59.326 2.027 - 2.042: 99.2868% ( 1) 00:15:59.326 2.042 - 2.057: 99.2992% ( 2) 00:15:59.326 2.057 - 2.072: 99.3178% ( 3) 00:15:59.326 2.072 - 2.088: 99.3488% ( 5) 00:15:59.326 2.088 - 2.103: 99.3550% ( 1) 00:15:59.326 2.103 - 2.118: 99.3612% ( 1) 00:15:59.326 2.164 - 2.179: 99.3674% ( 1) 00:15:59.326 2.225 - 2.240: 99.3736% ( 1) 00:15:59.326 2.362 - 2.377: 99.3798% ( 1) 00:15:59.326 2.499 - 2.514: 99.3860% ( 1) 00:15:59.326 3.749 - 3.764: 99.3922% ( 1) 00:15:59.326 3.870 - 3.886: 99.3984% ( 1) 00:15:59.326 3.901 - 3.931: 99.4046% ( 1) 00:15:59.326 4.419 - 4.450: 99.4108% ( 1) 00:15:59.326 4.693 - 4.724: 99.4170% ( 1) 00:15:59.326 4.968 - 4.998: 99.4232% ( 1) 00:15:59.326 4.998 - 5.029: 99.4294% ( 1) 00:15:59.326 5.059 - 5.090: 99.4356% ( 1) 00:15:59.326 5.181 - 5.211: 99.4418% ( 1) 00:15:59.326 5.333 - 5.364: 99.4480% ( 1) 00:15:59.326 5.394 - 5.425: 99.4542% ( 1) 00:15:59.326 5.455 - 5.486: 99.4604% ( 1) 00:15:59.326 5.486 - 5.516: 99.4728% ( 2) 00:15:59.326 5.790 - 5.821: 99.4790% ( 1) 00:15:59.326 5.943 - 5.973: 99.4914% ( 2) 00:15:59.326 6.065 - 
6.095: 99.4976% ( 1) 00:15:59.326 6.370 - 6.400: 99.5100% ( 2) 00:15:59.326 14.629 - 14.690: 99.5162% ( 1) 00:15:59.326 17.798 - 17.920: 99.5225% ( 1) 00:15:59.326 3994.575 - 4025.783: 100.0000% ( 77) 00:15:59.326 00:15:59.326 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:59.327 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:59.327 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:59.327 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:59.327 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:59.327 [ 00:15:59.327 { 00:15:59.327 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:59.327 "subtype": "Discovery", 00:15:59.327 "listen_addresses": [], 00:15:59.327 "allow_any_host": true, 00:15:59.327 "hosts": [] 00:15:59.327 }, 00:15:59.327 { 00:15:59.327 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:59.327 "subtype": "NVMe", 00:15:59.327 "listen_addresses": [ 00:15:59.327 { 00:15:59.327 "trtype": "VFIOUSER", 00:15:59.327 "adrfam": "IPv4", 00:15:59.327 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:59.327 "trsvcid": "0" 00:15:59.327 } 00:15:59.327 ], 00:15:59.327 "allow_any_host": true, 00:15:59.327 "hosts": [], 00:15:59.327 "serial_number": "SPDK1", 00:15:59.327 "model_number": "SPDK bdev Controller", 00:15:59.327 "max_namespaces": 32, 00:15:59.327 "min_cntlid": 1, 00:15:59.327 "max_cntlid": 65519, 00:15:59.327 "namespaces": [ 00:15:59.327 { 00:15:59.327 "nsid": 1, 00:15:59.327 "bdev_name": "Malloc1", 00:15:59.327 "name": "Malloc1", 00:15:59.327 "nguid": 
"2A2A4EC2D207440E8EC12F8D73E43864", 00:15:59.327 "uuid": "2a2a4ec2-d207-440e-8ec1-2f8d73e43864" 00:15:59.327 }, 00:15:59.327 { 00:15:59.327 "nsid": 2, 00:15:59.327 "bdev_name": "Malloc3", 00:15:59.327 "name": "Malloc3", 00:15:59.327 "nguid": "A2A51AEC520B48B382A804F60C3000D9", 00:15:59.327 "uuid": "a2a51aec-520b-48b3-82a8-04f60c3000d9" 00:15:59.327 } 00:15:59.327 ] 00:15:59.327 }, 00:15:59.327 { 00:15:59.327 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:59.327 "subtype": "NVMe", 00:15:59.327 "listen_addresses": [ 00:15:59.327 { 00:15:59.327 "trtype": "VFIOUSER", 00:15:59.327 "adrfam": "IPv4", 00:15:59.327 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:59.327 "trsvcid": "0" 00:15:59.327 } 00:15:59.327 ], 00:15:59.327 "allow_any_host": true, 00:15:59.327 "hosts": [], 00:15:59.327 "serial_number": "SPDK2", 00:15:59.327 "model_number": "SPDK bdev Controller", 00:15:59.327 "max_namespaces": 32, 00:15:59.327 "min_cntlid": 1, 00:15:59.327 "max_cntlid": 65519, 00:15:59.327 "namespaces": [ 00:15:59.327 { 00:15:59.327 "nsid": 1, 00:15:59.327 "bdev_name": "Malloc2", 00:15:59.327 "name": "Malloc2", 00:15:59.327 "nguid": "83EFC3A4091A40568DFDD840F05991B1", 00:15:59.327 "uuid": "83efc3a4-091a-4056-8dfd-d840f05991b1" 00:15:59.327 } 00:15:59.327 ] 00:15:59.327 } 00:15:59.327 ] 00:15:59.327 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:59.327 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1162264 00:15:59.327 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:59.327 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:59.327 05:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:59.327 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:59.327 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:59.327 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:59.327 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:59.327 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:59.585 [2024-12-10 05:40:47.340579] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:59.585 Malloc4 00:15:59.585 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:59.843 [2024-12-10 05:40:47.558364] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:59.843 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:59.843 Asynchronous Event Request test 00:15:59.843 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.843 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.843 Registering asynchronous event callbacks... 00:15:59.843 Starting namespace attribute notice tests for all controllers... 
00:15:59.843 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:59.843 aer_cb - Changed Namespace 00:15:59.843 Cleaning up... 00:16:00.100 [ 00:16:00.100 { 00:16:00.100 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:00.100 "subtype": "Discovery", 00:16:00.100 "listen_addresses": [], 00:16:00.100 "allow_any_host": true, 00:16:00.100 "hosts": [] 00:16:00.100 }, 00:16:00.100 { 00:16:00.100 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:00.100 "subtype": "NVMe", 00:16:00.100 "listen_addresses": [ 00:16:00.100 { 00:16:00.100 "trtype": "VFIOUSER", 00:16:00.100 "adrfam": "IPv4", 00:16:00.100 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:00.100 "trsvcid": "0" 00:16:00.100 } 00:16:00.100 ], 00:16:00.100 "allow_any_host": true, 00:16:00.100 "hosts": [], 00:16:00.100 "serial_number": "SPDK1", 00:16:00.101 "model_number": "SPDK bdev Controller", 00:16:00.101 "max_namespaces": 32, 00:16:00.101 "min_cntlid": 1, 00:16:00.101 "max_cntlid": 65519, 00:16:00.101 "namespaces": [ 00:16:00.101 { 00:16:00.101 "nsid": 1, 00:16:00.101 "bdev_name": "Malloc1", 00:16:00.101 "name": "Malloc1", 00:16:00.101 "nguid": "2A2A4EC2D207440E8EC12F8D73E43864", 00:16:00.101 "uuid": "2a2a4ec2-d207-440e-8ec1-2f8d73e43864" 00:16:00.101 }, 00:16:00.101 { 00:16:00.101 "nsid": 2, 00:16:00.101 "bdev_name": "Malloc3", 00:16:00.101 "name": "Malloc3", 00:16:00.101 "nguid": "A2A51AEC520B48B382A804F60C3000D9", 00:16:00.101 "uuid": "a2a51aec-520b-48b3-82a8-04f60c3000d9" 00:16:00.101 } 00:16:00.101 ] 00:16:00.101 }, 00:16:00.101 { 00:16:00.101 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:00.101 "subtype": "NVMe", 00:16:00.101 "listen_addresses": [ 00:16:00.101 { 00:16:00.101 "trtype": "VFIOUSER", 00:16:00.101 "adrfam": "IPv4", 00:16:00.101 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:00.101 "trsvcid": "0" 00:16:00.101 } 00:16:00.101 ], 00:16:00.101 "allow_any_host": true, 00:16:00.101 "hosts": [], 00:16:00.101 "serial_number": 
"SPDK2", 00:16:00.101 "model_number": "SPDK bdev Controller", 00:16:00.101 "max_namespaces": 32, 00:16:00.101 "min_cntlid": 1, 00:16:00.101 "max_cntlid": 65519, 00:16:00.101 "namespaces": [ 00:16:00.101 { 00:16:00.101 "nsid": 1, 00:16:00.101 "bdev_name": "Malloc2", 00:16:00.101 "name": "Malloc2", 00:16:00.101 "nguid": "83EFC3A4091A40568DFDD840F05991B1", 00:16:00.101 "uuid": "83efc3a4-091a-4056-8dfd-d840f05991b1" 00:16:00.101 }, 00:16:00.101 { 00:16:00.101 "nsid": 2, 00:16:00.101 "bdev_name": "Malloc4", 00:16:00.101 "name": "Malloc4", 00:16:00.101 "nguid": "4D31B743F5114EE4B54727F97741ABE5", 00:16:00.101 "uuid": "4d31b743-f511-4ee4-b547-27f97741abe5" 00:16:00.101 } 00:16:00.101 ] 00:16:00.101 } 00:16:00.101 ] 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1162264 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1154629 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1154629 ']' 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1154629 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1154629 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1154629' 00:16:00.101 killing process with pid 1154629 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1154629 00:16:00.101 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1154629 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1162490 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1162490' 00:16:00.359 Process pid: 1162490 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1162490 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1162490 ']' 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.359 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.360 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.360 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.360 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:00.360 [2024-12-10 05:40:48.135286] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:00.360 [2024-12-10 05:40:48.136155] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:16:00.360 [2024-12-10 05:40:48.136201] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.360 [2024-12-10 05:40:48.212031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.360 [2024-12-10 05:40:48.247548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.360 [2024-12-10 05:40:48.247596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.360 [2024-12-10 05:40:48.247604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.360 [2024-12-10 05:40:48.247610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:00.360 [2024-12-10 05:40:48.247616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.360 [2024-12-10 05:40:48.249022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.360 [2024-12-10 05:40:48.249134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.360 [2024-12-10 05:40:48.249241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.360 [2024-12-10 05:40:48.249242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:00.619 [2024-12-10 05:40:48.317781] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:00.619 [2024-12-10 05:40:48.318651] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:00.619 [2024-12-10 05:40:48.318849] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:00.619 [2024-12-10 05:40:48.319281] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:00.619 [2024-12-10 05:40:48.319312] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:16:00.619 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.619 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:00.619 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:01.556 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:01.814 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:01.814 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:01.815 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:01.815 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:01.815 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:02.074 Malloc1 00:16:02.074 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:02.332 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:02.332 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:02.590 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:02.590 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:02.591 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:02.848 Malloc2 00:16:02.848 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:03.106 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:03.364 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:03.364 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:03.364 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1162490 00:16:03.364 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1162490 ']' 00:16:03.364 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1162490 00:16:03.364 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:03.364 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.364 05:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1162490 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1162490' 00:16:03.623 killing process with pid 1162490 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1162490 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1162490 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:03.623 00:16:03.623 real 0m50.848s 00:16:03.623 user 3m16.704s 00:16:03.623 sys 0m3.186s 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:03.623 ************************************ 00:16:03.623 END TEST nvmf_vfio_user 00:16:03.623 ************************************ 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.623 05:40:51 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.882 ************************************ 00:16:03.882 START TEST nvmf_vfio_user_nvme_compliance 00:16:03.882 ************************************ 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:03.882 * Looking for test storage... 00:16:03.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:03.882 05:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:03.882 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:03.883 05:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:03.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.883 --rc genhtml_branch_coverage=1 00:16:03.883 --rc genhtml_function_coverage=1 00:16:03.883 --rc genhtml_legend=1 00:16:03.883 --rc geninfo_all_blocks=1 00:16:03.883 --rc geninfo_unexecuted_blocks=1 00:16:03.883 00:16:03.883 ' 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:03.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.883 --rc genhtml_branch_coverage=1 00:16:03.883 --rc genhtml_function_coverage=1 00:16:03.883 --rc genhtml_legend=1 00:16:03.883 --rc geninfo_all_blocks=1 00:16:03.883 --rc geninfo_unexecuted_blocks=1 00:16:03.883 00:16:03.883 ' 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:03.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.883 --rc genhtml_branch_coverage=1 00:16:03.883 --rc genhtml_function_coverage=1 00:16:03.883 --rc 
genhtml_legend=1 00:16:03.883 --rc geninfo_all_blocks=1 00:16:03.883 --rc geninfo_unexecuted_blocks=1 00:16:03.883 00:16:03.883 ' 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:03.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.883 --rc genhtml_branch_coverage=1 00:16:03.883 --rc genhtml_function_coverage=1 00:16:03.883 --rc genhtml_legend=1 00:16:03.883 --rc geninfo_all_blocks=1 00:16:03.883 --rc geninfo_unexecuted_blocks=1 00:16:03.883 00:16:03.883 ' 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.883 05:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:03.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:03.883 05:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1163067 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1163067' 00:16:03.883 Process pid: 1163067 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1163067 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1163067 ']' 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.883 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.884 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.884 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.884 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.142 [2024-12-10 05:40:51.796462] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:16:04.142 [2024-12-10 05:40:51.796511] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.142 [2024-12-10 05:40:51.869204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.142 [2024-12-10 05:40:51.906820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.142 [2024-12-10 05:40:51.906861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.142 [2024-12-10 05:40:51.906868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.142 [2024-12-10 05:40:51.906874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.142 [2024-12-10 05:40:51.906879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:04.142 [2024-12-10 05:40:51.908158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.142 [2024-12-10 05:40:51.908269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.142 [2024-12-10 05:40:51.908270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.142 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.142 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:04.142 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.514 05:40:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.514 malloc0 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:05.514 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:05.514 00:16:05.514 00:16:05.514 CUnit - A unit testing framework for C - Version 2.1-3 00:16:05.514 http://cunit.sourceforge.net/ 00:16:05.514 00:16:05.514 00:16:05.514 Suite: nvme_compliance 00:16:05.514 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 05:40:53.244624] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.514 [2024-12-10 05:40:53.245952] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:05.514 [2024-12-10 05:40:53.245969] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:05.514 [2024-12-10 05:40:53.245975] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:05.514 [2024-12-10 05:40:53.247641] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.514 passed 00:16:05.514 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 05:40:53.326199] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.514 [2024-12-10 05:40:53.329221] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.514 passed 00:16:05.772 Test: admin_identify_ns ...[2024-12-10 05:40:53.413356] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.772 [2024-12-10 05:40:53.474195] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:05.772 [2024-12-10 05:40:53.482185] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:05.772 [2024-12-10 05:40:53.503261] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:16:05.772 passed 00:16:05.772 Test: admin_get_features_mandatory_features ...[2024-12-10 05:40:53.577776] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.772 [2024-12-10 05:40:53.581797] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.772 passed 00:16:05.772 Test: admin_get_features_optional_features ...[2024-12-10 05:40:53.657298] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.772 [2024-12-10 05:40:53.660322] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.030 passed 00:16:06.030 Test: admin_set_features_number_of_queues ...[2024-12-10 05:40:53.736016] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.030 [2024-12-10 05:40:53.841254] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.030 passed 00:16:06.030 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 05:40:53.918029] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.030 [2024-12-10 05:40:53.921055] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.288 passed 00:16:06.288 Test: admin_get_log_page_with_lpo ...[2024-12-10 05:40:53.996331] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.288 [2024-12-10 05:40:54.065175] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:06.288 [2024-12-10 05:40:54.078236] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.288 passed 00:16:06.288 Test: fabric_property_get ...[2024-12-10 05:40:54.153600] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.288 [2024-12-10 05:40:54.154841] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:06.288 [2024-12-10 05:40:54.156620] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.546 passed 00:16:06.546 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 05:40:54.236148] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.546 [2024-12-10 05:40:54.237380] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:06.546 [2024-12-10 05:40:54.239169] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.546 passed 00:16:06.546 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 05:40:54.315339] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.546 [2024-12-10 05:40:54.402171] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:06.546 [2024-12-10 05:40:54.418175] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:06.546 [2024-12-10 05:40:54.423250] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.804 passed 00:16:06.804 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 05:40:54.497009] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.804 [2024-12-10 05:40:54.498242] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:06.804 [2024-12-10 05:40:54.500034] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.804 passed 00:16:06.804 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 05:40:54.576690] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.804 [2024-12-10 05:40:54.652174] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:06.804 [2024-12-10 
05:40:54.676175] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:06.804 [2024-12-10 05:40:54.681247] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.062 passed 00:16:07.062 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 05:40:54.756814] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.062 [2024-12-10 05:40:54.758049] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:07.062 [2024-12-10 05:40:54.758075] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:07.062 [2024-12-10 05:40:54.759830] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.062 passed 00:16:07.062 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 05:40:54.839521] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.062 [2024-12-10 05:40:54.931173] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:07.062 [2024-12-10 05:40:54.939170] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:07.062 [2024-12-10 05:40:54.947181] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:07.320 [2024-12-10 05:40:54.955193] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:07.320 [2024-12-10 05:40:54.984269] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.320 passed 00:16:07.320 Test: admin_create_io_sq_verify_pc ...[2024-12-10 05:40:55.059819] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.320 [2024-12-10 05:40:55.080180] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:07.320 [2024-12-10 05:40:55.098008] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.320 passed 00:16:07.320 Test: admin_create_io_qp_max_qps ...[2024-12-10 05:40:55.175532] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.693 [2024-12-10 05:40:56.294177] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:08.950 [2024-12-10 05:40:56.679727] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.950 passed 00:16:08.950 Test: admin_create_io_sq_shared_cq ...[2024-12-10 05:40:56.756547] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:09.209 [2024-12-10 05:40:56.892172] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:09.209 [2024-12-10 05:40:56.929235] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:09.209 passed 00:16:09.209 00:16:09.209 Run Summary: Type Total Ran Passed Failed Inactive 00:16:09.209 suites 1 1 n/a 0 0 00:16:09.209 tests 18 18 18 0 0 00:16:09.209 asserts 360 360 360 0 n/a 00:16:09.209 00:16:09.209 Elapsed time = 1.517 seconds 00:16:09.209 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1163067 00:16:09.209 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1163067 ']' 00:16:09.209 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1163067 00:16:09.209 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:09.209 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.209 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1163067 00:16:09.209 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.209 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.209 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1163067' 00:16:09.209 killing process with pid 1163067 00:16:09.209 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1163067 00:16:09.209 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1163067 00:16:09.468 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:09.468 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:09.468 00:16:09.468 real 0m5.678s 00:16:09.468 user 0m15.935s 00:16:09.468 sys 0m0.512s 00:16:09.468 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.468 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:09.468 ************************************ 00:16:09.468 END TEST nvmf_vfio_user_nvme_compliance 00:16:09.468 ************************************ 00:16:09.468 05:40:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:09.468 05:40:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:09.468 05:40:57 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.468 05:40:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:09.468 ************************************ 00:16:09.468 START TEST nvmf_vfio_user_fuzz 00:16:09.468 ************************************ 00:16:09.468 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:09.727 * Looking for test storage... 00:16:09.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.727 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:09.727 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:16:09.727 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:09.727 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:09.727 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.727 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.727 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.728 05:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:09.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.728 --rc genhtml_branch_coverage=1 00:16:09.728 --rc genhtml_function_coverage=1 00:16:09.728 --rc genhtml_legend=1 00:16:09.728 --rc geninfo_all_blocks=1 00:16:09.728 --rc geninfo_unexecuted_blocks=1 00:16:09.728 00:16:09.728 ' 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:09.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.728 --rc genhtml_branch_coverage=1 00:16:09.728 --rc genhtml_function_coverage=1 00:16:09.728 --rc genhtml_legend=1 00:16:09.728 --rc geninfo_all_blocks=1 00:16:09.728 --rc geninfo_unexecuted_blocks=1 00:16:09.728 00:16:09.728 ' 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:09.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.728 --rc genhtml_branch_coverage=1 00:16:09.728 --rc genhtml_function_coverage=1 00:16:09.728 --rc genhtml_legend=1 00:16:09.728 --rc geninfo_all_blocks=1 00:16:09.728 --rc geninfo_unexecuted_blocks=1 00:16:09.728 00:16:09.728 ' 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:09.728 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:09.728 --rc genhtml_branch_coverage=1 00:16:09.728 --rc genhtml_function_coverage=1 00:16:09.728 --rc genhtml_legend=1 00:16:09.728 --rc geninfo_all_blocks=1 00:16:09.728 --rc geninfo_unexecuted_blocks=1 00:16:09.728 00:16:09.728 ' 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.728 05:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:09.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:09.728 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1164169 00:16:09.729 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1164169' 00:16:09.729 Process pid: 1164169 00:16:09.729 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:09.729 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1164169 00:16:09.729 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:09.729 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1164169 ']' 00:16:09.729 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.729 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.729 05:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.729 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.729 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:09.987 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.987 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:09.987 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.921 malloc0 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.921 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:11.179 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.179 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:11.179 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.179 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:11.179 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.179 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:11.179 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:43.254 Fuzzing completed. Shutting down the fuzz application 00:16:43.254 00:16:43.254 Dumping successful admin opcodes: 00:16:43.254 9, 10, 00:16:43.254 Dumping successful io opcodes: 00:16:43.254 0, 00:16:43.254 NS: 0x20000081ef00 I/O qp, Total commands completed: 1018324, total successful commands: 4001, random_seed: 480104256 00:16:43.254 NS: 0x20000081ef00 admin qp, Total commands completed: 251472, total successful commands: 59, random_seed: 1549367808 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1164169 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1164169 ']' 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1164169 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1164169 00:16:43.254 05:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1164169' 00:16:43.254 killing process with pid 1164169 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1164169 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1164169 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:43.254 00:16:43.254 real 0m32.206s 00:16:43.254 user 0m29.676s 00:16:43.254 sys 0m31.433s 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:43.254 ************************************ 00:16:43.254 END TEST nvmf_vfio_user_fuzz 00:16:43.254 ************************************ 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:43.254 ************************************ 00:16:43.254 START TEST nvmf_auth_target 00:16:43.254 ************************************ 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:43.254 * Looking for test storage... 00:16:43.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:43.254 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.255 05:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.255 05:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:43.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.255 --rc genhtml_branch_coverage=1 00:16:43.255 --rc genhtml_function_coverage=1 00:16:43.255 --rc genhtml_legend=1 00:16:43.255 --rc geninfo_all_blocks=1 00:16:43.255 --rc geninfo_unexecuted_blocks=1 00:16:43.255 00:16:43.255 ' 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:43.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.255 --rc genhtml_branch_coverage=1 00:16:43.255 --rc genhtml_function_coverage=1 00:16:43.255 --rc genhtml_legend=1 00:16:43.255 --rc geninfo_all_blocks=1 00:16:43.255 --rc geninfo_unexecuted_blocks=1 00:16:43.255 00:16:43.255 ' 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:43.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.255 --rc genhtml_branch_coverage=1 00:16:43.255 --rc genhtml_function_coverage=1 00:16:43.255 --rc genhtml_legend=1 00:16:43.255 --rc geninfo_all_blocks=1 00:16:43.255 --rc geninfo_unexecuted_blocks=1 00:16:43.255 00:16:43.255 ' 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:43.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.255 --rc genhtml_branch_coverage=1 00:16:43.255 --rc genhtml_function_coverage=1 00:16:43.255 --rc genhtml_legend=1 00:16:43.255 
--rc geninfo_all_blocks=1 00:16:43.255 --rc geninfo_unexecuted_blocks=1 00:16:43.255 00:16:43.255 ' 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.255 
05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:43.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:43.255 05:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:43.255 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:43.256 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:43.256 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.256 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:43.256 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:43.256 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:43.256 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.256 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.256 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.256 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:43.256 05:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:43.256 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:43.256 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.571 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:47.571 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:47.571 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:47.571 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:47.571 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:47.571 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:47.571 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:47.571 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:47.572 05:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:47.572 05:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:47.572 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:47.572 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.572 
05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:47.572 Found net devices under 0000:af:00.0: cvl_0_0 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:47.572 
05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:47.572 Found net devices under 0000:af:00.1: cvl_0_1 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:47.572 05:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:47.572 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:47.830 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:47.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:16:47.831 00:16:47.831 --- 10.0.0.2 ping statistics --- 00:16:47.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.831 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:47.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:16:47.831 00:16:47.831 --- 10.0.0.1 ping statistics --- 00:16:47.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.831 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1172380 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1172380 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1172380 ']' 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.831 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.398 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.398 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:48.398 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:48.398 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:48.398 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1172541 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4a487ab5f469be0de3f671dee25f06013f66a2c164447a67 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.A3M 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4a487ab5f469be0de3f671dee25f06013f66a2c164447a67 0 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4a487ab5f469be0de3f671dee25f06013f66a2c164447a67 0 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4a487ab5f469be0de3f671dee25f06013f66a2c164447a67 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.A3M 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.A3M 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.A3M 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4ee2b29b6d40642d7fc8fa4ba8748e1bc3f31b1af2dafda310a2cd343778b7d7 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.UQF 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4ee2b29b6d40642d7fc8fa4ba8748e1bc3f31b1af2dafda310a2cd343778b7d7 3 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4ee2b29b6d40642d7fc8fa4ba8748e1bc3f31b1af2dafda310a2cd343778b7d7 3 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4ee2b29b6d40642d7fc8fa4ba8748e1bc3f31b1af2dafda310a2cd343778b7d7 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.UQF 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.UQF 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.UQF 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c16bd5f1130e1264a3a8547791b7aa82 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.sMg 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c16bd5f1130e1264a3a8547791b7aa82 1 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
c16bd5f1130e1264a3a8547791b7aa82 1 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c16bd5f1130e1264a3a8547791b7aa82 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:48.398 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.sMg 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.sMg 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.sMg 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0024e7393418e60a62a039feec70d0f2560520abd21c6476 00:16:48.399 05:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.JcF 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0024e7393418e60a62a039feec70d0f2560520abd21c6476 2 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0024e7393418e60a62a039feec70d0f2560520abd21c6476 2 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0024e7393418e60a62a039feec70d0f2560520abd21c6476 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.JcF 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.JcF 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.JcF 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=952486f2e143ebf63d1def57bffabc086dd4a7e33e3fe838 00:16:48.399 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.MXA 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 952486f2e143ebf63d1def57bffabc086dd4a7e33e3fe838 2 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 952486f2e143ebf63d1def57bffabc086dd4a7e33e3fe838 2 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=952486f2e143ebf63d1def57bffabc086dd4a7e33e3fe838 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.MXA 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.MXA 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.MXA 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1b605b887cf5f4a2153501af17c02d7c 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.CzX 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1b605b887cf5f4a2153501af17c02d7c 1 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1b605b887cf5f4a2153501af17c02d7c 1 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1b605b887cf5f4a2153501af17c02d7c 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.CzX 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.CzX 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.CzX 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5f4d7fdfa71a12697232c09d24fa103a881b5c8302847bca05424baee0e15711 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.YrL 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5f4d7fdfa71a12697232c09d24fa103a881b5c8302847bca05424baee0e15711 3 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 5f4d7fdfa71a12697232c09d24fa103a881b5c8302847bca05424baee0e15711 3 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5f4d7fdfa71a12697232c09d24fa103a881b5c8302847bca05424baee0e15711 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.YrL 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.YrL 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.YrL 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1172380 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1172380 ']' 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.658 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.917 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:48.917 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1172541 /var/tmp/host.sock 00:16:48.917 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1172541 ']' 00:16:48.917 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:48.917 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.917 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:48.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:48.917 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.917 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.A3M 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.A3M 00:16:49.175 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.A3M 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.UQF ]] 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UQF 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UQF 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UQF 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.sMg 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.sMg 00:16:49.434 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.sMg 00:16:49.692 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.JcF ]] 00:16:49.692 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JcF 00:16:49.692 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.692 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.692 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.692 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JcF 00:16:49.692 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JcF 00:16:49.951 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:49.951 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.MXA 00:16:49.951 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.951 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.951 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.951 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.MXA 00:16:49.951 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.MXA 00:16:50.209 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.CzX ]] 00:16:50.209 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CzX 00:16:50.209 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.209 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.209 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.209 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CzX 00:16:50.209 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CzX 00:16:50.209 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:50.209 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.YrL 00:16:50.209 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.209 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.209 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.209 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.YrL 00:16:50.209 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.YrL 00:16:50.467 05:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:50.467 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:50.467 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.467 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.467 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:50.467 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.725 05:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.725 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.983 00:16:50.984 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.984 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.984 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.242 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.242 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.242 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.242 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.242 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.242 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.242 { 00:16:51.242 "cntlid": 1, 00:16:51.242 "qid": 0, 00:16:51.242 "state": "enabled", 00:16:51.242 "thread": "nvmf_tgt_poll_group_000", 00:16:51.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:51.242 "listen_address": { 00:16:51.242 "trtype": "TCP", 00:16:51.242 "adrfam": "IPv4", 00:16:51.242 "traddr": "10.0.0.2", 00:16:51.242 "trsvcid": "4420" 00:16:51.242 }, 00:16:51.242 "peer_address": { 00:16:51.242 "trtype": "TCP", 00:16:51.242 "adrfam": "IPv4", 00:16:51.242 "traddr": "10.0.0.1", 00:16:51.242 "trsvcid": "58274" 00:16:51.242 }, 00:16:51.242 "auth": { 00:16:51.242 "state": "completed", 00:16:51.242 "digest": "sha256", 00:16:51.242 "dhgroup": "null" 00:16:51.242 } 00:16:51.242 } 00:16:51.242 ]' 00:16:51.242 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.242 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.242 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.242 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:51.242 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.242 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.242 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.242 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.500 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:16:51.500 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:16:52.067 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.067 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:52.067 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.067 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.067 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.067 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.067 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:52.067 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.325 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.583 00:16:52.583 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.583 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.583 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.583 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.583 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.583 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.583 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.842 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.842 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.842 { 00:16:52.842 "cntlid": 3, 00:16:52.842 "qid": 0, 00:16:52.842 "state": "enabled", 00:16:52.842 "thread": "nvmf_tgt_poll_group_000", 00:16:52.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:52.842 "listen_address": { 00:16:52.842 "trtype": "TCP", 00:16:52.842 "adrfam": "IPv4", 00:16:52.842 
"traddr": "10.0.0.2", 00:16:52.842 "trsvcid": "4420" 00:16:52.842 }, 00:16:52.842 "peer_address": { 00:16:52.842 "trtype": "TCP", 00:16:52.842 "adrfam": "IPv4", 00:16:52.842 "traddr": "10.0.0.1", 00:16:52.842 "trsvcid": "58308" 00:16:52.842 }, 00:16:52.842 "auth": { 00:16:52.842 "state": "completed", 00:16:52.842 "digest": "sha256", 00:16:52.842 "dhgroup": "null" 00:16:52.842 } 00:16:52.842 } 00:16:52.842 ]' 00:16:52.842 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.842 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.842 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.842 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:52.842 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.842 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.842 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.842 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.101 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:16:53.101 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.668 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.926 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.926 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.926 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.926 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.926 00:16:54.184 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.184 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.184 
05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.184 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.184 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.184 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.184 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.184 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.184 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.184 { 00:16:54.184 "cntlid": 5, 00:16:54.184 "qid": 0, 00:16:54.184 "state": "enabled", 00:16:54.184 "thread": "nvmf_tgt_poll_group_000", 00:16:54.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:54.184 "listen_address": { 00:16:54.184 "trtype": "TCP", 00:16:54.184 "adrfam": "IPv4", 00:16:54.184 "traddr": "10.0.0.2", 00:16:54.184 "trsvcid": "4420" 00:16:54.184 }, 00:16:54.184 "peer_address": { 00:16:54.184 "trtype": "TCP", 00:16:54.184 "adrfam": "IPv4", 00:16:54.184 "traddr": "10.0.0.1", 00:16:54.184 "trsvcid": "58340" 00:16:54.184 }, 00:16:54.184 "auth": { 00:16:54.184 "state": "completed", 00:16:54.184 "digest": "sha256", 00:16:54.184 "dhgroup": "null" 00:16:54.184 } 00:16:54.184 } 00:16:54.184 ]' 00:16:54.184 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.184 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.184 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:54.443 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:54.443 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.443 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.443 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.443 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.701 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:16:54.701 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:16:55.267 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.267 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:55.267 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.267 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.267 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.267 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.267 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:55.267 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.525 00:16:55.526 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.526 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.526 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.784 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.784 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.784 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.784 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.784 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.784 
05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.784 { 00:16:55.784 "cntlid": 7, 00:16:55.784 "qid": 0, 00:16:55.784 "state": "enabled", 00:16:55.784 "thread": "nvmf_tgt_poll_group_000", 00:16:55.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:55.784 "listen_address": { 00:16:55.784 "trtype": "TCP", 00:16:55.784 "adrfam": "IPv4", 00:16:55.784 "traddr": "10.0.0.2", 00:16:55.784 "trsvcid": "4420" 00:16:55.784 }, 00:16:55.784 "peer_address": { 00:16:55.784 "trtype": "TCP", 00:16:55.784 "adrfam": "IPv4", 00:16:55.784 "traddr": "10.0.0.1", 00:16:55.784 "trsvcid": "58376" 00:16:55.784 }, 00:16:55.784 "auth": { 00:16:55.784 "state": "completed", 00:16:55.784 "digest": "sha256", 00:16:55.784 "dhgroup": "null" 00:16:55.784 } 00:16:55.784 } 00:16:55.784 ]' 00:16:55.784 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.784 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.784 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.042 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:56.042 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.042 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.042 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.042 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.300 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:16:56.300 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.866 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.867 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:56.867 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.867 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.867 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.867 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.867 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.867 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.867 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.867 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.867 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.125 00:16:57.125 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.125 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.125 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.382 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.382 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.382 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.382 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.382 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.382 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.382 { 00:16:57.382 "cntlid": 9, 00:16:57.382 "qid": 0, 00:16:57.382 "state": "enabled", 00:16:57.382 "thread": "nvmf_tgt_poll_group_000", 00:16:57.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:57.382 "listen_address": { 00:16:57.382 "trtype": "TCP", 00:16:57.382 "adrfam": "IPv4", 00:16:57.382 "traddr": "10.0.0.2", 00:16:57.382 "trsvcid": "4420" 00:16:57.382 }, 00:16:57.382 "peer_address": { 00:16:57.382 "trtype": "TCP", 00:16:57.382 "adrfam": "IPv4", 00:16:57.382 "traddr": "10.0.0.1", 00:16:57.382 "trsvcid": "58398" 00:16:57.382 
}, 00:16:57.382 "auth": { 00:16:57.382 "state": "completed", 00:16:57.382 "digest": "sha256", 00:16:57.382 "dhgroup": "ffdhe2048" 00:16:57.382 } 00:16:57.382 } 00:16:57.382 ]' 00:16:57.382 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.382 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.382 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.382 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.382 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.639 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.639 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.639 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.639 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:16:57.639 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret 
DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:16:58.206 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.206 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:58.206 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.206 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.206 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.206 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.206 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.206 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.464 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.723 00:16:58.723 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.723 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.723 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.981 { 00:16:58.981 "cntlid": 11, 00:16:58.981 "qid": 0, 00:16:58.981 "state": "enabled", 00:16:58.981 "thread": "nvmf_tgt_poll_group_000", 00:16:58.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:58.981 "listen_address": { 00:16:58.981 "trtype": "TCP", 00:16:58.981 "adrfam": "IPv4", 00:16:58.981 "traddr": "10.0.0.2", 00:16:58.981 "trsvcid": "4420" 00:16:58.981 }, 00:16:58.981 "peer_address": { 00:16:58.981 "trtype": "TCP", 00:16:58.981 "adrfam": "IPv4", 00:16:58.981 "traddr": "10.0.0.1", 00:16:58.981 "trsvcid": "58424" 00:16:58.981 }, 00:16:58.981 "auth": { 00:16:58.981 "state": "completed", 00:16:58.981 "digest": "sha256", 00:16:58.981 "dhgroup": "ffdhe2048" 00:16:58.981 } 00:16:58.981 } 00:16:58.981 ]' 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.981 05:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.981 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.982 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.240 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:16:59.240 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:16:59.805 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.805 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:59.805 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:59.805 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.805 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.805 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.805 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:59.805 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:00.063 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:00.063 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.063 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.063 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:00.063 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.063 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.064 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.064 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.064 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:00.064 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.064 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.064 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.064 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.322 00:17:00.322 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.322 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.322 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.580 05:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:00.580 {
00:17:00.580 "cntlid": 13,
00:17:00.580 "qid": 0,
00:17:00.580 "state": "enabled",
00:17:00.580 "thread": "nvmf_tgt_poll_group_000",
00:17:00.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:00.580 "listen_address": {
00:17:00.580 "trtype": "TCP",
00:17:00.580 "adrfam": "IPv4",
00:17:00.580 "traddr": "10.0.0.2",
00:17:00.580 "trsvcid": "4420"
00:17:00.580 },
00:17:00.580 "peer_address": {
00:17:00.580 "trtype": "TCP",
00:17:00.580 "adrfam": "IPv4",
00:17:00.580 "traddr": "10.0.0.1",
00:17:00.580 "trsvcid": "40786"
00:17:00.580 },
00:17:00.580 "auth": {
00:17:00.580 "state": "completed",
00:17:00.580 "digest": "sha256",
00:17:00.580 "dhgroup": "ffdhe2048"
00:17:00.580 }
00:17:00.580 }
00:17:00.580 ]'
00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:00.580 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:00.838 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m:
00:17:00.838 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m:
00:17:01.405 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:01.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:01.405 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:01.405 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.405 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.405 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.405 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:01.405 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:01.405 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:01.663 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:01.921
00:17:01.921 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:01.921 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:01.921 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:02.178 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:02.178 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:02.178 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.178 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.178 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.178 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:02.178 {
00:17:02.178 "cntlid": 15,
00:17:02.178 "qid": 0,
00:17:02.178 "state": "enabled",
00:17:02.178 "thread": "nvmf_tgt_poll_group_000",
00:17:02.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:02.178 "listen_address": {
00:17:02.178 "trtype": "TCP",
00:17:02.178 "adrfam": "IPv4",
00:17:02.178 "traddr": "10.0.0.2",
00:17:02.178 "trsvcid": "4420"
00:17:02.178 },
00:17:02.178 "peer_address": {
00:17:02.178 "trtype": "TCP",
00:17:02.178 "adrfam": "IPv4",
00:17:02.178 "traddr": "10.0.0.1",
00:17:02.178 "trsvcid": "40834"
00:17:02.178 },
00:17:02.178 "auth": {
00:17:02.178 "state": "completed",
00:17:02.178 "digest": "sha256",
00:17:02.179 "dhgroup": "ffdhe2048"
00:17:02.179 }
00:17:02.179 }
00:17:02.179 ]'
00:17:02.179 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:02.179 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:02.179 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:02.179 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:02.179 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:02.179 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:02.179 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:02.179 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:02.436 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=:
00:17:02.436 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=:
00:17:03.003 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:03.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:03.003 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:03.003 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.003 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.003 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.003 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:03.003 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:03.003 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:03.003 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:03.261 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:17:03.261 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:03.261 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:03.261 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:03.261 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:03.261 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:03.261 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:03.261 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.261 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.261 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.261 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:03.261 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:03.261 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:03.519
00:17:03.519 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:03.519 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:03.519 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:03.777 {
00:17:03.777 "cntlid": 17,
00:17:03.777 "qid": 0,
00:17:03.777 "state": "enabled",
00:17:03.777 "thread": "nvmf_tgt_poll_group_000",
00:17:03.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:03.777 "listen_address": {
00:17:03.777 "trtype": "TCP",
00:17:03.777 "adrfam": "IPv4",
00:17:03.777 "traddr": "10.0.0.2",
00:17:03.777 "trsvcid": "4420"
00:17:03.777 },
00:17:03.777 "peer_address": {
00:17:03.777 "trtype": "TCP",
00:17:03.777 "adrfam": "IPv4",
00:17:03.777 "traddr": "10.0.0.1",
00:17:03.777 "trsvcid": "40860"
00:17:03.777 },
00:17:03.777 "auth": {
00:17:03.777 "state": "completed",
00:17:03.777 "digest": "sha256",
00:17:03.777 "dhgroup": "ffdhe3072"
00:17:03.777 }
00:17:03.777 }
00:17:03.777 ]'
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:03.777 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:04.037 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=:
00:17:04.037 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=:
00:17:04.604 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:04.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:04.604 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:04.604 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.604 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.604 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.604 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:04.604 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:04.604 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:04.861 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:05.119
00:17:05.119 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:05.119 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:05.119 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:05.376 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:05.376 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:05.376 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.376 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.376 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.376 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:05.376 {
00:17:05.376 "cntlid": 19,
00:17:05.376 "qid": 0,
00:17:05.376 "state": "enabled",
00:17:05.376 "thread": "nvmf_tgt_poll_group_000",
00:17:05.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:05.376 "listen_address": {
00:17:05.376 "trtype": "TCP",
00:17:05.376 "adrfam": "IPv4",
00:17:05.376 "traddr": "10.0.0.2",
00:17:05.376 "trsvcid": "4420"
00:17:05.376 },
00:17:05.376 "peer_address": {
00:17:05.376 "trtype": "TCP",
00:17:05.376 "adrfam": "IPv4",
00:17:05.376 "traddr": "10.0.0.1",
00:17:05.376 "trsvcid": "40890"
00:17:05.376 },
00:17:05.376 "auth": {
00:17:05.376 "state": "completed",
00:17:05.376 "digest": "sha256",
00:17:05.376 "dhgroup": "ffdhe3072"
00:17:05.376 }
00:17:05.376 }
00:17:05.376 ]'
00:17:05.376 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:05.376 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:05.376 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:05.376 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:05.377 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:05.377 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:05.377 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:05.377 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:05.634 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==:
00:17:05.634 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==:
00:17:06.198 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:06.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:06.199 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:06.199 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.199 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.199 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.199 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:06.199 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:06.199 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:06.456 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:06.714
00:17:06.714 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:06.714 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:06.714 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:06.971 {
00:17:06.971 "cntlid": 21,
00:17:06.971 "qid": 0,
00:17:06.971 "state": "enabled",
00:17:06.971 "thread": "nvmf_tgt_poll_group_000",
00:17:06.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:06.971 "listen_address": {
00:17:06.971 "trtype": "TCP",
00:17:06.971 "adrfam": "IPv4",
00:17:06.971 "traddr": "10.0.0.2",
00:17:06.971 "trsvcid": "4420"
00:17:06.971 },
00:17:06.971 "peer_address": {
00:17:06.971 "trtype": "TCP",
00:17:06.971 "adrfam": "IPv4",
00:17:06.971 "traddr": "10.0.0.1",
00:17:06.971 "trsvcid": "40932"
00:17:06.971 },
00:17:06.971 "auth": {
00:17:06.971 "state": "completed",
00:17:06.971 "digest": "sha256",
00:17:06.971 "dhgroup": "ffdhe3072"
00:17:06.971 }
00:17:06.971 }
00:17:06.971 ]'
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:06.971 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:07.228 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m:
00:17:07.228 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m:
00:17:07.792 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:07.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:07.792 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:07.792 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.792 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.792 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.792 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:07.792 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:07.792 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:08.049 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:17:08.049 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:08.049 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:08.049 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:08.049 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:08.049 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:08.049 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:17:08.049 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.049 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.049 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.050 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:08.050 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:08.050 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:08.307
00:17:08.307 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:08.307 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:08.307 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:08.565 {
00:17:08.565 "cntlid": 23,
00:17:08.565 "qid": 0,
00:17:08.565 "state": "enabled",
00:17:08.565 "thread": "nvmf_tgt_poll_group_000",
00:17:08.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:08.565 "listen_address": {
00:17:08.565 "trtype": "TCP",
00:17:08.565 "adrfam": "IPv4",
00:17:08.565 "traddr": "10.0.0.2",
00:17:08.565 "trsvcid": "4420"
00:17:08.565 },
00:17:08.565 "peer_address": {
00:17:08.565 "trtype": "TCP",
00:17:08.565 "adrfam": "IPv4",
00:17:08.565 "traddr": "10.0.0.1",
00:17:08.565 "trsvcid": "40960"
00:17:08.565 },
00:17:08.565 "auth": {
00:17:08.565 "state": "completed",
00:17:08.565 "digest": "sha256",
00:17:08.565 "dhgroup": "ffdhe3072"
00:17:08.565 }
00:17:08.565 }
00:17:08.565 ]'
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:08.565 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:08.823 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=:
00:17:08.823 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=:
00:17:09.388 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:09.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:09.388 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:09.388 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.388 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.388 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.388 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:09.388 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:09.388 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:09.388 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set
+x 00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.646 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.903 00:17:09.903 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.903 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.903 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.161 05:41:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.161 { 00:17:10.161 "cntlid": 25, 00:17:10.161 "qid": 0, 00:17:10.161 "state": "enabled", 00:17:10.161 "thread": "nvmf_tgt_poll_group_000", 00:17:10.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:10.161 "listen_address": { 00:17:10.161 "trtype": "TCP", 00:17:10.161 "adrfam": "IPv4", 00:17:10.161 "traddr": "10.0.0.2", 00:17:10.161 "trsvcid": "4420" 00:17:10.161 }, 00:17:10.161 "peer_address": { 00:17:10.161 "trtype": "TCP", 00:17:10.161 "adrfam": "IPv4", 00:17:10.161 "traddr": "10.0.0.1", 00:17:10.161 "trsvcid": "36122" 00:17:10.161 }, 00:17:10.161 "auth": { 00:17:10.161 "state": "completed", 00:17:10.161 "digest": "sha256", 00:17:10.161 "dhgroup": "ffdhe4096" 00:17:10.161 } 00:17:10.161 } 00:17:10.161 ]' 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.161 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.418 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:10.418 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:10.983 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.983 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:10.983 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.983 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.983 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.983 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.983 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:10.983 05:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.241 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.498 00:17:11.498 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.498 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.498 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.755 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.755 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.755 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.755 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.755 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.755 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.755 { 00:17:11.755 "cntlid": 27, 00:17:11.755 "qid": 0, 00:17:11.755 "state": "enabled", 00:17:11.755 "thread": "nvmf_tgt_poll_group_000", 00:17:11.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:11.755 "listen_address": { 00:17:11.755 "trtype": "TCP", 00:17:11.755 "adrfam": "IPv4", 00:17:11.755 "traddr": "10.0.0.2", 00:17:11.755 
"trsvcid": "4420" 00:17:11.755 }, 00:17:11.755 "peer_address": { 00:17:11.755 "trtype": "TCP", 00:17:11.755 "adrfam": "IPv4", 00:17:11.755 "traddr": "10.0.0.1", 00:17:11.755 "trsvcid": "36132" 00:17:11.755 }, 00:17:11.755 "auth": { 00:17:11.755 "state": "completed", 00:17:11.755 "digest": "sha256", 00:17:11.755 "dhgroup": "ffdhe4096" 00:17:11.755 } 00:17:11.755 } 00:17:11.755 ]' 00:17:11.755 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.755 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.755 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.756 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:11.756 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.756 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.756 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.756 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.013 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:12.013 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:12.577 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.577 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:12.577 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.577 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.577 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.577 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.577 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:12.578 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.835 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.092 00:17:13.092 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.092 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:13.092 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.349 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.349 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.349 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.349 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.349 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.349 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.349 { 00:17:13.349 "cntlid": 29, 00:17:13.349 "qid": 0, 00:17:13.349 "state": "enabled", 00:17:13.349 "thread": "nvmf_tgt_poll_group_000", 00:17:13.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:13.349 "listen_address": { 00:17:13.349 "trtype": "TCP", 00:17:13.349 "adrfam": "IPv4", 00:17:13.349 "traddr": "10.0.0.2", 00:17:13.349 "trsvcid": "4420" 00:17:13.349 }, 00:17:13.349 "peer_address": { 00:17:13.349 "trtype": "TCP", 00:17:13.349 "adrfam": "IPv4", 00:17:13.349 "traddr": "10.0.0.1", 00:17:13.349 "trsvcid": "36148" 00:17:13.349 }, 00:17:13.349 "auth": { 00:17:13.349 "state": "completed", 00:17:13.349 "digest": "sha256", 00:17:13.349 "dhgroup": "ffdhe4096" 00:17:13.349 } 00:17:13.349 } 00:17:13.349 ]' 00:17:13.349 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.349 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.349 05:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.349 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.349 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.349 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.350 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.350 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.607 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:13.607 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:14.171 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.171 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:14.171 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.171 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.171 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.171 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.171 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.171 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.686 00:17:14.686 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.686 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.686 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.946 { 00:17:14.946 "cntlid": 31, 00:17:14.946 "qid": 0, 00:17:14.946 "state": "enabled", 00:17:14.946 "thread": "nvmf_tgt_poll_group_000", 00:17:14.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:14.946 "listen_address": { 00:17:14.946 "trtype": "TCP", 00:17:14.946 "adrfam": "IPv4", 00:17:14.946 "traddr": "10.0.0.2", 00:17:14.946 "trsvcid": "4420" 00:17:14.946 }, 00:17:14.946 "peer_address": { 00:17:14.946 "trtype": "TCP", 00:17:14.946 "adrfam": "IPv4", 00:17:14.946 "traddr": "10.0.0.1", 00:17:14.946 "trsvcid": "36166" 00:17:14.946 }, 00:17:14.946 "auth": { 00:17:14.946 "state": "completed", 00:17:14.946 "digest": "sha256", 00:17:14.946 "dhgroup": "ffdhe4096" 00:17:14.946 } 00:17:14.946 } 00:17:14.946 ]' 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.946 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.203 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:15.203 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:15.783 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.783 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:15.783 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.783 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.783 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.783 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.783 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.783 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.783 05:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.059 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.331 00:17:16.331 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.331 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.331 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.588 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.588 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.588 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.588 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.588 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.588 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.588 { 00:17:16.588 "cntlid": 33, 00:17:16.588 "qid": 0, 00:17:16.588 "state": "enabled", 00:17:16.589 "thread": "nvmf_tgt_poll_group_000", 00:17:16.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:16.589 "listen_address": { 00:17:16.589 "trtype": "TCP", 00:17:16.589 "adrfam": "IPv4", 00:17:16.589 "traddr": "10.0.0.2", 00:17:16.589 
"trsvcid": "4420" 00:17:16.589 }, 00:17:16.589 "peer_address": { 00:17:16.589 "trtype": "TCP", 00:17:16.589 "adrfam": "IPv4", 00:17:16.589 "traddr": "10.0.0.1", 00:17:16.589 "trsvcid": "36190" 00:17:16.589 }, 00:17:16.589 "auth": { 00:17:16.589 "state": "completed", 00:17:16.589 "digest": "sha256", 00:17:16.589 "dhgroup": "ffdhe6144" 00:17:16.589 } 00:17:16.589 } 00:17:16.589 ]' 00:17:16.589 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.589 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.589 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.589 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.589 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.589 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.589 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.589 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.847 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:16.847 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:17.412 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.412 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:17.412 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.412 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.412 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.412 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.412 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.412 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.669 05:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.669 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.926 00:17:17.926 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.926 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.926 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.183 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.183 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.183 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.183 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.183 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.183 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.183 { 00:17:18.183 "cntlid": 35, 00:17:18.183 "qid": 0, 00:17:18.183 "state": "enabled", 00:17:18.183 "thread": "nvmf_tgt_poll_group_000", 00:17:18.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:18.183 "listen_address": { 00:17:18.183 "trtype": "TCP", 00:17:18.183 "adrfam": "IPv4", 00:17:18.183 "traddr": "10.0.0.2", 00:17:18.183 "trsvcid": "4420" 00:17:18.183 }, 00:17:18.183 "peer_address": { 00:17:18.183 "trtype": "TCP", 00:17:18.183 "adrfam": "IPv4", 00:17:18.183 "traddr": "10.0.0.1", 00:17:18.183 "trsvcid": "36222" 00:17:18.183 }, 00:17:18.183 "auth": { 00:17:18.183 "state": "completed", 00:17:18.183 "digest": "sha256", 00:17:18.183 "dhgroup": "ffdhe6144" 00:17:18.183 } 00:17:18.183 } 00:17:18.183 ]' 00:17:18.183 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.183 05:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.183 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.441 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.441 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.441 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.441 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.442 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.699 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:18.699 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:19.265 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.265 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:19.265 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.265 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.265 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.265 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.265 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.265 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.265 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.266 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.830 00:17:19.830 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.830 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.830 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.830 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.830 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.830 05:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.830 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.830 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.830 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.830 { 00:17:19.830 "cntlid": 37, 00:17:19.830 "qid": 0, 00:17:19.830 "state": "enabled", 00:17:19.830 "thread": "nvmf_tgt_poll_group_000", 00:17:19.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:19.830 "listen_address": { 00:17:19.830 "trtype": "TCP", 00:17:19.830 "adrfam": "IPv4", 00:17:19.830 "traddr": "10.0.0.2", 00:17:19.830 "trsvcid": "4420" 00:17:19.830 }, 00:17:19.830 "peer_address": { 00:17:19.830 "trtype": "TCP", 00:17:19.830 "adrfam": "IPv4", 00:17:19.830 "traddr": "10.0.0.1", 00:17:19.830 "trsvcid": "50164" 00:17:19.830 }, 00:17:19.830 "auth": { 00:17:19.830 "state": "completed", 00:17:19.830 "digest": "sha256", 00:17:19.830 "dhgroup": "ffdhe6144" 00:17:19.830 } 00:17:19.830 } 00:17:19.830 ]' 00:17:20.087 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.087 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.088 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.088 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.088 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.088 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.088 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.088 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.345 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:20.345 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:20.911 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.911 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:20.911 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.911 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.911 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.912 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.169 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.169 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:21.169 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.169 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.427 00:17:21.427 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.427 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.427 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.684 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.684 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.684 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.684 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.684 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.684 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.684 { 00:17:21.684 "cntlid": 39, 00:17:21.684 "qid": 0, 00:17:21.684 "state": "enabled", 00:17:21.684 "thread": "nvmf_tgt_poll_group_000", 00:17:21.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:21.684 "listen_address": { 00:17:21.684 "trtype": "TCP", 00:17:21.684 "adrfam": 
"IPv4", 00:17:21.684 "traddr": "10.0.0.2", 00:17:21.684 "trsvcid": "4420" 00:17:21.684 }, 00:17:21.684 "peer_address": { 00:17:21.684 "trtype": "TCP", 00:17:21.684 "adrfam": "IPv4", 00:17:21.684 "traddr": "10.0.0.1", 00:17:21.684 "trsvcid": "50174" 00:17:21.684 }, 00:17:21.684 "auth": { 00:17:21.684 "state": "completed", 00:17:21.684 "digest": "sha256", 00:17:21.684 "dhgroup": "ffdhe6144" 00:17:21.684 } 00:17:21.684 } 00:17:21.684 ]' 00:17:21.684 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.684 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.684 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.684 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:21.684 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.685 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.685 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.685 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.942 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:21.942 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:22.508 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.508 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:22.508 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.508 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.508 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.508 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.508 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.508 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.508 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.766 
05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.766 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.332 00:17:23.332 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.332 05:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.332 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.332 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.332 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.332 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.332 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.332 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.332 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.332 { 00:17:23.332 "cntlid": 41, 00:17:23.332 "qid": 0, 00:17:23.332 "state": "enabled", 00:17:23.332 "thread": "nvmf_tgt_poll_group_000", 00:17:23.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:23.332 "listen_address": { 00:17:23.332 "trtype": "TCP", 00:17:23.332 "adrfam": "IPv4", 00:17:23.332 "traddr": "10.0.0.2", 00:17:23.332 "trsvcid": "4420" 00:17:23.332 }, 00:17:23.332 "peer_address": { 00:17:23.332 "trtype": "TCP", 00:17:23.332 "adrfam": "IPv4", 00:17:23.332 "traddr": "10.0.0.1", 00:17:23.332 "trsvcid": "50206" 00:17:23.332 }, 00:17:23.332 "auth": { 00:17:23.332 "state": "completed", 00:17:23.332 "digest": "sha256", 00:17:23.332 "dhgroup": "ffdhe8192" 00:17:23.332 } 00:17:23.332 } 00:17:23.332 ]' 00:17:23.332 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.590 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:17:23.590 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.590 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.590 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.590 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.590 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.590 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.847 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:23.847 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:24.412 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.412 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:24.412 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.412 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.412 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.412 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.412 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:24.412 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.670 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.927 00:17:24.927 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.927 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.928 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.185 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.185 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.185 05:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.185 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.185 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.185 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.185 { 00:17:25.185 "cntlid": 43, 00:17:25.185 "qid": 0, 00:17:25.185 "state": "enabled", 00:17:25.185 "thread": "nvmf_tgt_poll_group_000", 00:17:25.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:25.185 "listen_address": { 00:17:25.185 "trtype": "TCP", 00:17:25.185 "adrfam": "IPv4", 00:17:25.185 "traddr": "10.0.0.2", 00:17:25.185 "trsvcid": "4420" 00:17:25.185 }, 00:17:25.185 "peer_address": { 00:17:25.185 "trtype": "TCP", 00:17:25.185 "adrfam": "IPv4", 00:17:25.185 "traddr": "10.0.0.1", 00:17:25.185 "trsvcid": "50248" 00:17:25.185 }, 00:17:25.185 "auth": { 00:17:25.185 "state": "completed", 00:17:25.185 "digest": "sha256", 00:17:25.185 "dhgroup": "ffdhe8192" 00:17:25.185 } 00:17:25.185 } 00:17:25.185 ]' 00:17:25.185 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.185 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.185 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.443 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.443 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.443 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.443 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.443 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.700 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:25.700 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:26.265 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.265 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:26.265 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.265 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.265 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.265 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.265 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:26.265 05:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:26.265 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:26.265 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.265 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:26.265 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:26.265 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.265 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.265 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.265 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.265 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.523 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.523 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.523 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.523 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.781 00:17:26.781 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.781 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.781 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.039 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.039 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.039 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.039 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.039 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.039 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.039 { 00:17:27.039 "cntlid": 45, 00:17:27.039 "qid": 0, 00:17:27.039 "state": "enabled", 00:17:27.039 "thread": "nvmf_tgt_poll_group_000", 00:17:27.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:27.039 
"listen_address": { 00:17:27.039 "trtype": "TCP", 00:17:27.039 "adrfam": "IPv4", 00:17:27.039 "traddr": "10.0.0.2", 00:17:27.039 "trsvcid": "4420" 00:17:27.039 }, 00:17:27.039 "peer_address": { 00:17:27.039 "trtype": "TCP", 00:17:27.039 "adrfam": "IPv4", 00:17:27.039 "traddr": "10.0.0.1", 00:17:27.039 "trsvcid": "50272" 00:17:27.039 }, 00:17:27.039 "auth": { 00:17:27.039 "state": "completed", 00:17:27.039 "digest": "sha256", 00:17:27.039 "dhgroup": "ffdhe8192" 00:17:27.039 } 00:17:27.039 } 00:17:27.039 ]' 00:17:27.039 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.039 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.039 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.039 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.298 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.298 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.298 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.298 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.298 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:27.298 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:27.865 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.865 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:27.865 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.865 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.865 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.865 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.865 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.865 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.123 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.124 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.688 00:17:28.688 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.688 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:28.688 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.946 { 00:17:28.946 "cntlid": 47, 00:17:28.946 "qid": 0, 00:17:28.946 "state": "enabled", 00:17:28.946 "thread": "nvmf_tgt_poll_group_000", 00:17:28.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:28.946 "listen_address": { 00:17:28.946 "trtype": "TCP", 00:17:28.946 "adrfam": "IPv4", 00:17:28.946 "traddr": "10.0.0.2", 00:17:28.946 "trsvcid": "4420" 00:17:28.946 }, 00:17:28.946 "peer_address": { 00:17:28.946 "trtype": "TCP", 00:17:28.946 "adrfam": "IPv4", 00:17:28.946 "traddr": "10.0.0.1", 00:17:28.946 "trsvcid": "50288" 00:17:28.946 }, 00:17:28.946 "auth": { 00:17:28.946 "state": "completed", 00:17:28.946 "digest": "sha256", 00:17:28.946 "dhgroup": "ffdhe8192" 00:17:28.946 } 00:17:28.946 } 00:17:28.946 ]' 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.946 05:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.946 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.204 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:29.204 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:29.769 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.769 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:29.769 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:29.769 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.769 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.769 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:29.769 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.769 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.769 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.769 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.027 
05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.027 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.285 00:17:30.285 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.285 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.285 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.543 { 00:17:30.543 "cntlid": 49, 00:17:30.543 "qid": 0, 00:17:30.543 "state": "enabled", 00:17:30.543 "thread": "nvmf_tgt_poll_group_000", 00:17:30.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:30.543 "listen_address": { 00:17:30.543 "trtype": "TCP", 00:17:30.543 "adrfam": "IPv4", 00:17:30.543 "traddr": "10.0.0.2", 00:17:30.543 "trsvcid": "4420" 00:17:30.543 }, 00:17:30.543 "peer_address": { 00:17:30.543 "trtype": "TCP", 00:17:30.543 "adrfam": "IPv4", 00:17:30.543 "traddr": "10.0.0.1", 00:17:30.543 "trsvcid": "36946" 00:17:30.543 }, 00:17:30.543 "auth": { 00:17:30.543 "state": "completed", 00:17:30.543 "digest": "sha384", 00:17:30.543 "dhgroup": "null" 00:17:30.543 } 00:17:30.543 } 00:17:30.543 ]' 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:30.543 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.801 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:30.801 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:31.366 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.366 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:31.366 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.366 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.366 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.366 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.366 05:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:31.366 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:31.623 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:31.623 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.623 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.623 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:31.623 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.623 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.623 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.624 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.624 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.624 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.624 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.624 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.624 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.882 00:17:31.882 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.882 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.882 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.140 { 00:17:32.140 "cntlid": 51, 00:17:32.140 "qid": 0, 00:17:32.140 "state": "enabled", 00:17:32.140 "thread": "nvmf_tgt_poll_group_000", 00:17:32.140 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:32.140 "listen_address": { 00:17:32.140 "trtype": "TCP", 00:17:32.140 "adrfam": "IPv4", 00:17:32.140 "traddr": "10.0.0.2", 00:17:32.140 "trsvcid": "4420" 00:17:32.140 }, 00:17:32.140 "peer_address": { 00:17:32.140 "trtype": "TCP", 00:17:32.140 "adrfam": "IPv4", 00:17:32.140 "traddr": "10.0.0.1", 00:17:32.140 "trsvcid": "36960" 00:17:32.140 }, 00:17:32.140 "auth": { 00:17:32.140 "state": "completed", 00:17:32.140 "digest": "sha384", 00:17:32.140 "dhgroup": "null" 00:17:32.140 } 00:17:32.140 } 00:17:32.140 ]' 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.140 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.399 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:32.399 05:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:32.964 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.964 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:32.964 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.964 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.964 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.964 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.964 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:32.964 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.222 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.480 00:17:33.480 05:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.480 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.480 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.480 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.480 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.480 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.480 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.480 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.480 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.480 { 00:17:33.480 "cntlid": 53, 00:17:33.480 "qid": 0, 00:17:33.480 "state": "enabled", 00:17:33.480 "thread": "nvmf_tgt_poll_group_000", 00:17:33.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:33.480 "listen_address": { 00:17:33.480 "trtype": "TCP", 00:17:33.480 "adrfam": "IPv4", 00:17:33.480 "traddr": "10.0.0.2", 00:17:33.480 "trsvcid": "4420" 00:17:33.480 }, 00:17:33.480 "peer_address": { 00:17:33.480 "trtype": "TCP", 00:17:33.480 "adrfam": "IPv4", 00:17:33.480 "traddr": "10.0.0.1", 00:17:33.480 "trsvcid": "36990" 00:17:33.480 }, 00:17:33.480 "auth": { 00:17:33.480 "state": "completed", 00:17:33.480 "digest": "sha384", 00:17:33.480 "dhgroup": "null" 00:17:33.480 } 00:17:33.480 } 00:17:33.480 ]' 00:17:33.480 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:33.480 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.738 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.738 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:33.738 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.738 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.738 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.738 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.996 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:33.996 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:34.562 
05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.562 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.820 00:17:34.820 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.820 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.820 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.077 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.077 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.077 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.077 05:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.077 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.077 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.077 { 00:17:35.077 "cntlid": 55, 00:17:35.077 "qid": 0, 00:17:35.077 "state": "enabled", 00:17:35.077 "thread": "nvmf_tgt_poll_group_000", 00:17:35.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:35.077 "listen_address": { 00:17:35.077 "trtype": "TCP", 00:17:35.077 "adrfam": "IPv4", 00:17:35.077 "traddr": "10.0.0.2", 00:17:35.077 "trsvcid": "4420" 00:17:35.077 }, 00:17:35.077 "peer_address": { 00:17:35.077 "trtype": "TCP", 00:17:35.077 "adrfam": "IPv4", 00:17:35.077 "traddr": "10.0.0.1", 00:17:35.077 "trsvcid": "37010" 00:17:35.077 }, 00:17:35.077 "auth": { 00:17:35.077 "state": "completed", 00:17:35.077 "digest": "sha384", 00:17:35.077 "dhgroup": "null" 00:17:35.077 } 00:17:35.077 } 00:17:35.077 ]' 00:17:35.077 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.077 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.077 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.077 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:35.077 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.335 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.335 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.335 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.335 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:35.335 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:35.901 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.901 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:35.901 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.901 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.901 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.901 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.901 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.901 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.901 05:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.159 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.417 00:17:36.417 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.417 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.417 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.674 { 00:17:36.674 "cntlid": 57, 00:17:36.674 "qid": 0, 00:17:36.674 "state": "enabled", 00:17:36.674 "thread": "nvmf_tgt_poll_group_000", 00:17:36.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:36.674 "listen_address": { 00:17:36.674 "trtype": "TCP", 00:17:36.674 "adrfam": "IPv4", 00:17:36.674 "traddr": "10.0.0.2", 00:17:36.674 
"trsvcid": "4420" 00:17:36.674 }, 00:17:36.674 "peer_address": { 00:17:36.674 "trtype": "TCP", 00:17:36.674 "adrfam": "IPv4", 00:17:36.674 "traddr": "10.0.0.1", 00:17:36.674 "trsvcid": "37040" 00:17:36.674 }, 00:17:36.674 "auth": { 00:17:36.674 "state": "completed", 00:17:36.674 "digest": "sha384", 00:17:36.674 "dhgroup": "ffdhe2048" 00:17:36.674 } 00:17:36.674 } 00:17:36.674 ]' 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.674 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.932 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:36.932 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:37.497 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.497 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.497 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.497 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.497 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.497 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.497 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:37.497 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:37.754 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:37.754 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.754 05:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.754 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:37.754 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.754 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.754 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.754 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.754 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.754 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.754 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.755 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.755 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.012 00:17:38.012 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.012 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.012 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.270 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.270 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.270 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.270 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.270 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.270 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.270 { 00:17:38.270 "cntlid": 59, 00:17:38.270 "qid": 0, 00:17:38.270 "state": "enabled", 00:17:38.270 "thread": "nvmf_tgt_poll_group_000", 00:17:38.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:38.270 "listen_address": { 00:17:38.270 "trtype": "TCP", 00:17:38.270 "adrfam": "IPv4", 00:17:38.270 "traddr": "10.0.0.2", 00:17:38.270 "trsvcid": "4420" 00:17:38.270 }, 00:17:38.270 "peer_address": { 00:17:38.270 "trtype": "TCP", 00:17:38.270 "adrfam": "IPv4", 00:17:38.270 "traddr": "10.0.0.1", 00:17:38.270 "trsvcid": "37056" 00:17:38.270 }, 00:17:38.270 "auth": { 00:17:38.270 "state": "completed", 00:17:38.270 "digest": "sha384", 00:17:38.270 "dhgroup": "ffdhe2048" 00:17:38.270 } 00:17:38.270 } 00:17:38.270 ]' 00:17:38.270 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.270 05:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.270 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.270 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.270 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.270 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.270 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.270 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.527 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:38.527 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:39.093 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.093 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:39.093 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.093 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.093 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.093 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.093 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.093 05:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.350 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.608 00:17:39.608 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.608 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.608 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.866 05:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.866 { 00:17:39.866 "cntlid": 61, 00:17:39.866 "qid": 0, 00:17:39.866 "state": "enabled", 00:17:39.866 "thread": "nvmf_tgt_poll_group_000", 00:17:39.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:39.866 "listen_address": { 00:17:39.866 "trtype": "TCP", 00:17:39.866 "adrfam": "IPv4", 00:17:39.866 "traddr": "10.0.0.2", 00:17:39.866 "trsvcid": "4420" 00:17:39.866 }, 00:17:39.866 "peer_address": { 00:17:39.866 "trtype": "TCP", 00:17:39.866 "adrfam": "IPv4", 00:17:39.866 "traddr": "10.0.0.1", 00:17:39.866 "trsvcid": "37060" 00:17:39.866 }, 00:17:39.866 "auth": { 00:17:39.866 "state": "completed", 00:17:39.866 "digest": "sha384", 00:17:39.866 "dhgroup": "ffdhe2048" 00:17:39.866 } 00:17:39.866 } 00:17:39.866 ]' 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.866 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.123 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:40.123 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:40.688 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.688 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:40.688 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.688 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.688 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.688 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.688 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:40.688 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.945 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.203 00:17:41.203 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.203 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.203 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.461 { 00:17:41.461 "cntlid": 63, 00:17:41.461 "qid": 0, 00:17:41.461 "state": "enabled", 00:17:41.461 "thread": "nvmf_tgt_poll_group_000", 00:17:41.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:41.461 "listen_address": { 00:17:41.461 "trtype": "TCP", 00:17:41.461 "adrfam": 
"IPv4", 00:17:41.461 "traddr": "10.0.0.2", 00:17:41.461 "trsvcid": "4420" 00:17:41.461 }, 00:17:41.461 "peer_address": { 00:17:41.461 "trtype": "TCP", 00:17:41.461 "adrfam": "IPv4", 00:17:41.461 "traddr": "10.0.0.1", 00:17:41.461 "trsvcid": "37086" 00:17:41.461 }, 00:17:41.461 "auth": { 00:17:41.461 "state": "completed", 00:17:41.461 "digest": "sha384", 00:17:41.461 "dhgroup": "ffdhe2048" 00:17:41.461 } 00:17:41.461 } 00:17:41.461 ]' 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.461 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.719 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:41.719 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:42.284 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.284 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:42.284 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.284 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.284 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.284 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.284 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.284 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:42.284 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.542 
05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.542 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.803 00:17:42.803 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.803 05:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.803 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.087 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.087 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.087 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.087 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.087 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.087 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.087 { 00:17:43.087 "cntlid": 65, 00:17:43.087 "qid": 0, 00:17:43.087 "state": "enabled", 00:17:43.087 "thread": "nvmf_tgt_poll_group_000", 00:17:43.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:43.087 "listen_address": { 00:17:43.087 "trtype": "TCP", 00:17:43.087 "adrfam": "IPv4", 00:17:43.087 "traddr": "10.0.0.2", 00:17:43.087 "trsvcid": "4420" 00:17:43.087 }, 00:17:43.087 "peer_address": { 00:17:43.087 "trtype": "TCP", 00:17:43.087 "adrfam": "IPv4", 00:17:43.087 "traddr": "10.0.0.1", 00:17:43.087 "trsvcid": "37116" 00:17:43.087 }, 00:17:43.087 "auth": { 00:17:43.088 "state": "completed", 00:17:43.088 "digest": "sha384", 00:17:43.088 "dhgroup": "ffdhe3072" 00:17:43.088 } 00:17:43.088 } 00:17:43.088 ]' 00:17:43.088 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.088 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:43.088 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.088 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:43.088 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.088 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.088 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.088 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.370 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:43.370 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.950 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.207 00:17:44.207 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.207 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.207 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.465 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.465 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.465 05:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.465 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.465 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.465 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.465 { 00:17:44.465 "cntlid": 67, 00:17:44.465 "qid": 0, 00:17:44.465 "state": "enabled", 00:17:44.465 "thread": "nvmf_tgt_poll_group_000", 00:17:44.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:44.465 "listen_address": { 00:17:44.465 "trtype": "TCP", 00:17:44.465 "adrfam": "IPv4", 00:17:44.465 "traddr": "10.0.0.2", 00:17:44.465 "trsvcid": "4420" 00:17:44.465 }, 00:17:44.465 "peer_address": { 00:17:44.465 "trtype": "TCP", 00:17:44.465 "adrfam": "IPv4", 00:17:44.465 "traddr": "10.0.0.1", 00:17:44.465 "trsvcid": "37152" 00:17:44.465 }, 00:17:44.465 "auth": { 00:17:44.465 "state": "completed", 00:17:44.465 "digest": "sha384", 00:17:44.465 "dhgroup": "ffdhe3072" 00:17:44.465 } 00:17:44.465 } 00:17:44.465 ]' 00:17:44.465 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.722 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.722 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.722 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.722 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.722 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.722 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.722 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.980 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:44.980 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.545 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.546 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.546 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.546 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.546 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.546 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.803 00:17:46.061 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.061 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.061 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.061 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.061 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.061 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.061 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.061 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.061 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.061 { 00:17:46.061 "cntlid": 69, 00:17:46.061 "qid": 0, 00:17:46.061 "state": "enabled", 00:17:46.061 "thread": "nvmf_tgt_poll_group_000", 00:17:46.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:46.061 
"listen_address": { 00:17:46.061 "trtype": "TCP", 00:17:46.061 "adrfam": "IPv4", 00:17:46.061 "traddr": "10.0.0.2", 00:17:46.061 "trsvcid": "4420" 00:17:46.061 }, 00:17:46.061 "peer_address": { 00:17:46.061 "trtype": "TCP", 00:17:46.061 "adrfam": "IPv4", 00:17:46.061 "traddr": "10.0.0.1", 00:17:46.061 "trsvcid": "37174" 00:17:46.061 }, 00:17:46.061 "auth": { 00:17:46.061 "state": "completed", 00:17:46.061 "digest": "sha384", 00:17:46.061 "dhgroup": "ffdhe3072" 00:17:46.061 } 00:17:46.061 } 00:17:46.061 ]' 00:17:46.061 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.319 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.319 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.319 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.319 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.319 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.319 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.319 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.577 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:46.577 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:47.142 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.142 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:47.142 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.142 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.142 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.142 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.142 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.142 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.142 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:47.143 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.143 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:47.143 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:47.143 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.143 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.143 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:47.143 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.143 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.143 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.143 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.399 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.399 00:17:47.657 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.657 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:47.657 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.657 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.657 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.657 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.657 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.657 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.657 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.657 { 00:17:47.657 "cntlid": 71, 00:17:47.657 "qid": 0, 00:17:47.657 "state": "enabled", 00:17:47.657 "thread": "nvmf_tgt_poll_group_000", 00:17:47.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:47.657 "listen_address": { 00:17:47.657 "trtype": "TCP", 00:17:47.657 "adrfam": "IPv4", 00:17:47.657 "traddr": "10.0.0.2", 00:17:47.657 "trsvcid": "4420" 00:17:47.657 }, 00:17:47.657 "peer_address": { 00:17:47.657 "trtype": "TCP", 00:17:47.657 "adrfam": "IPv4", 00:17:47.657 "traddr": "10.0.0.1", 00:17:47.657 "trsvcid": "37198" 00:17:47.657 }, 00:17:47.657 "auth": { 00:17:47.657 "state": "completed", 00:17:47.657 "digest": "sha384", 00:17:47.657 "dhgroup": "ffdhe3072" 00:17:47.657 } 00:17:47.657 } 00:17:47.657 ]' 00:17:47.657 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.914 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.914 05:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.914 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.914 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.914 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.914 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.914 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.172 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:48.172 05:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.737 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.738 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:48.738 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.738 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.738 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.738 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.738 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.303 00:17:49.303 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.303 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.303 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.303 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.303 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.303 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.303 05:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.303 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.303 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.303 { 00:17:49.303 "cntlid": 73, 00:17:49.303 "qid": 0, 00:17:49.303 "state": "enabled", 00:17:49.303 "thread": "nvmf_tgt_poll_group_000", 00:17:49.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:49.303 "listen_address": { 00:17:49.303 "trtype": "TCP", 00:17:49.303 "adrfam": "IPv4", 00:17:49.303 "traddr": "10.0.0.2", 00:17:49.303 "trsvcid": "4420" 00:17:49.303 }, 00:17:49.303 "peer_address": { 00:17:49.303 "trtype": "TCP", 00:17:49.303 "adrfam": "IPv4", 00:17:49.303 "traddr": "10.0.0.1", 00:17:49.303 "trsvcid": "56840" 00:17:49.303 }, 00:17:49.303 "auth": { 00:17:49.303 "state": "completed", 00:17:49.303 "digest": "sha384", 00:17:49.303 "dhgroup": "ffdhe4096" 00:17:49.303 } 00:17:49.303 } 00:17:49.303 ]' 00:17:49.303 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.303 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.303 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.561 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:49.561 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.561 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.561 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.561 05:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.818 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:49.819 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.384 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.641 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.641 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.641 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.641 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.899 00:17:50.899 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.899 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.899 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.899 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.899 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.899 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.899 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.899 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.899 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.899 { 00:17:50.899 "cntlid": 75, 00:17:50.899 "qid": 0, 00:17:50.899 "state": "enabled", 00:17:50.899 "thread": "nvmf_tgt_poll_group_000", 00:17:50.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:50.899 
"listen_address": { 00:17:50.899 "trtype": "TCP", 00:17:50.899 "adrfam": "IPv4", 00:17:50.899 "traddr": "10.0.0.2", 00:17:50.899 "trsvcid": "4420" 00:17:50.899 }, 00:17:50.899 "peer_address": { 00:17:50.899 "trtype": "TCP", 00:17:50.899 "adrfam": "IPv4", 00:17:50.899 "traddr": "10.0.0.1", 00:17:50.899 "trsvcid": "56876" 00:17:50.899 }, 00:17:50.899 "auth": { 00:17:50.899 "state": "completed", 00:17:50.899 "digest": "sha384", 00:17:50.899 "dhgroup": "ffdhe4096" 00:17:50.899 } 00:17:50.899 } 00:17:50.899 ]' 00:17:51.156 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.156 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.156 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.156 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:51.156 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.156 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.156 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.156 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.414 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:51.414 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:17:51.979 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.979 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:51.979 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.979 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.979 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.979 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.979 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:51.979 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.237 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.495 00:17:52.495 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:52.495 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.495 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.753 { 00:17:52.753 "cntlid": 77, 00:17:52.753 "qid": 0, 00:17:52.753 "state": "enabled", 00:17:52.753 "thread": "nvmf_tgt_poll_group_000", 00:17:52.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:52.753 "listen_address": { 00:17:52.753 "trtype": "TCP", 00:17:52.753 "adrfam": "IPv4", 00:17:52.753 "traddr": "10.0.0.2", 00:17:52.753 "trsvcid": "4420" 00:17:52.753 }, 00:17:52.753 "peer_address": { 00:17:52.753 "trtype": "TCP", 00:17:52.753 "adrfam": "IPv4", 00:17:52.753 "traddr": "10.0.0.1", 00:17:52.753 "trsvcid": "56902" 00:17:52.753 }, 00:17:52.753 "auth": { 00:17:52.753 "state": "completed", 00:17:52.753 "digest": "sha384", 00:17:52.753 "dhgroup": "ffdhe4096" 00:17:52.753 } 00:17:52.753 } 00:17:52.753 ]' 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.753 05:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.753 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.010 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:53.010 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:17:53.578 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.579 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:53.579 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.579 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.579 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.579 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:53.579 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:53.579 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:53.836 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:54.094
00:17:54.094 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:54.094 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:54.094 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:54.352 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:54.352 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:54.352 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.352 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.352 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.352 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:54.352 {
00:17:54.352 "cntlid": 79,
00:17:54.352 "qid": 0,
00:17:54.352 "state": "enabled",
00:17:54.352 "thread": "nvmf_tgt_poll_group_000",
00:17:54.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:54.352 "listen_address": {
00:17:54.352 "trtype": "TCP",
00:17:54.352 "adrfam": "IPv4",
00:17:54.352 "traddr": "10.0.0.2",
00:17:54.352 "trsvcid": "4420"
00:17:54.352 },
00:17:54.352 "peer_address": {
00:17:54.352 "trtype": "TCP",
00:17:54.352 "adrfam": "IPv4",
00:17:54.352 "traddr": "10.0.0.1",
00:17:54.352 "trsvcid": "56926"
00:17:54.352 },
00:17:54.352 "auth": {
00:17:54.352 "state": "completed",
00:17:54.352 "digest": "sha384",
00:17:54.352 "dhgroup": "ffdhe4096"
00:17:54.352 }
00:17:54.352 }
00:17:54.352 ]'
00:17:54.352 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:54.352 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:54.352 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:54.352 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:54.352 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:54.352 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:54.352 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:54.352 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:54.610 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=:
00:17:54.610 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=:
00:17:55.174 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:55.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:55.174 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:55.174 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.174 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.174 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.174 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:55.174 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:55.174 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:55.174 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:55.432 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:55.690
00:17:55.690 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:55.690 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:55.690 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:55.947 {
00:17:55.947 "cntlid": 81,
00:17:55.947 "qid": 0,
00:17:55.947 "state": "enabled",
00:17:55.947 "thread": "nvmf_tgt_poll_group_000",
00:17:55.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:55.947 "listen_address": {
00:17:55.947 "trtype": "TCP",
00:17:55.947 "adrfam": "IPv4",
00:17:55.947 "traddr": "10.0.0.2",
00:17:55.947 "trsvcid": "4420"
00:17:55.947 },
00:17:55.947 "peer_address": {
00:17:55.947 "trtype": "TCP",
00:17:55.947 "adrfam": "IPv4",
00:17:55.947 "traddr": "10.0.0.1",
00:17:55.947 "trsvcid": "56954"
00:17:55.947 },
00:17:55.947 "auth": {
00:17:55.947 "state": "completed",
00:17:55.947 "digest": "sha384",
00:17:55.947 "dhgroup": "ffdhe6144"
00:17:55.947 }
00:17:55.947 }
00:17:55.947 ]'
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:55.947 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:56.205 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=:
00:17:56.205 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=:
00:17:56.769 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:56.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:56.770 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:56.770 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.770 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:56.770 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.770 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:56.770 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:56.770 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:57.028 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:57.285
00:17:57.285 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:57.285 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:57.285 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:57.542 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:57.542 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:57.542 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.542 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:57.542 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.542 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:57.542 {
00:17:57.542 "cntlid": 83,
00:17:57.542 "qid": 0,
00:17:57.542 "state": "enabled",
00:17:57.542 "thread": "nvmf_tgt_poll_group_000",
00:17:57.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:57.542 "listen_address": {
00:17:57.542 "trtype": "TCP",
00:17:57.542 "adrfam": "IPv4",
00:17:57.542 "traddr": "10.0.0.2",
00:17:57.542 "trsvcid": "4420"
00:17:57.542 },
00:17:57.542 "peer_address": {
00:17:57.542 "trtype": "TCP",
00:17:57.542 "adrfam": "IPv4",
00:17:57.542 "traddr": "10.0.0.1",
00:17:57.542 "trsvcid": "56990"
00:17:57.542 },
00:17:57.542 "auth": {
00:17:57.542 "state": "completed",
00:17:57.542 "digest": "sha384",
00:17:57.542 "dhgroup": "ffdhe6144"
00:17:57.542 }
00:17:57.542 }
00:17:57.542 ]'
00:17:57.542 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:57.543 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:57.543 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:57.543 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:57.543 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:57.800 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:57.800 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:57.800 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:57.800 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==:
00:17:57.800 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==:
00:17:58.365 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:58.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:58.365 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:58.365 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.365 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:58.365 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.365 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:58.365 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:58.365 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:58.622 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:58.623 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:58.880
00:17:58.880 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:58.880 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:58.880 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:59.138 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:59.138 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:59.138 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.138 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:59.138 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.138 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:59.138 {
00:17:59.138 "cntlid": 85,
00:17:59.138 "qid": 0,
00:17:59.138 "state": "enabled",
00:17:59.138 "thread": "nvmf_tgt_poll_group_000",
00:17:59.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:17:59.138 "listen_address": {
00:17:59.138 "trtype": "TCP",
00:17:59.138 "adrfam": "IPv4",
00:17:59.138 "traddr": "10.0.0.2",
00:17:59.138 "trsvcid": "4420"
00:17:59.138 },
00:17:59.138 "peer_address": {
00:17:59.138 "trtype": "TCP",
00:17:59.138 "adrfam": "IPv4",
00:17:59.138 "traddr": "10.0.0.1",
00:17:59.138 "trsvcid": "57016"
00:17:59.138 },
00:17:59.138 "auth": {
00:17:59.138 "state": "completed",
00:17:59.138 "digest": "sha384",
00:17:59.138 "dhgroup": "ffdhe6144"
00:17:59.138 }
00:17:59.138 }
00:17:59.138 ]'
00:17:59.138 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:59.138 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:59.138 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:59.473 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:59.473 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:59.473 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:59.473 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:59.473 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:59.473 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m:
00:17:59.473 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m:
00:18:00.081 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:00.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:00.081 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:18:00.081 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.081 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:00.081 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.081 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:00.081 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:00.081 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:00.340 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:00.597
00:18:00.597 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:00.597 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:00.598 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:00.856 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:00.856 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:00.856 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.856 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:00.856 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.856 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:00.856 {
00:18:00.856 "cntlid": 87,
00:18:00.856 "qid": 0,
00:18:00.856 "state": "enabled",
00:18:00.856 "thread": "nvmf_tgt_poll_group_000",
00:18:00.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:18:00.856 "listen_address": {
00:18:00.856 "trtype": "TCP",
00:18:00.856 "adrfam": "IPv4",
00:18:00.856 "traddr": "10.0.0.2",
00:18:00.856 "trsvcid": "4420"
00:18:00.856 },
00:18:00.856 "peer_address": {
00:18:00.856 "trtype": "TCP",
00:18:00.856 "adrfam": "IPv4",
00:18:00.856 "traddr": "10.0.0.1",
00:18:00.856 "trsvcid": "47036"
00:18:00.856 },
00:18:00.856 "auth": {
00:18:00.856 "state": "completed",
00:18:00.856 "digest": "sha384",
00:18:00.856 "dhgroup": "ffdhe6144"
00:18:00.856 }
00:18:00.856 }
00:18:00.856 ]'
00:18:00.856 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:00.856 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:00.856 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:00.856 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:00.856 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:01.113 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:01.113 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:01.113 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:01.113 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=:
00:18:01.113 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=:
00:18:01.677 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:01.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:01.677 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:18:01.677 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:01.677 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:01.677 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.677 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:01.677 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:01.677 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:01.677 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:01.935 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:02.500
00:18:02.500 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:02.500 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:02.500 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:02.757 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:02.758 {
00:18:02.758 "cntlid": 89,
00:18:02.758 "qid": 0,
00:18:02.758 "state": "enabled",
00:18:02.758 "thread": "nvmf_tgt_poll_group_000",
00:18:02.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:18:02.758 "listen_address": {
00:18:02.758 "trtype": "TCP",
00:18:02.758 "adrfam": "IPv4",
00:18:02.758 "traddr": "10.0.0.2",
00:18:02.758 "trsvcid": "4420"
00:18:02.758 },
00:18:02.758 "peer_address": {
00:18:02.758 "trtype": "TCP",
00:18:02.758 "adrfam": "IPv4",
00:18:02.758 "traddr": "10.0.0.1",
00:18:02.758 "trsvcid": "47062"
00:18:02.758 },
00:18:02.758 "auth": {
00:18:02.758 "state": "completed",
00:18:02.758 "digest": "sha384",
00:18:02.758 "dhgroup": "ffdhe8192"
00:18:02.758 }
00:18:02.758 }
00:18:02.758 ]'
00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:02.758 05:42:50
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.758 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.015 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:03.015 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:03.580 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
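The trace repeats one cycle per (digest, dhgroup, keyid) combination: set the host's DH-HMAC-CHAP options, register the host NQN with a key, attach a controller through the host RPC socket, verify the qpair's `auth.state` is `completed` via `jq`, then tear everything down. A minimal sketch of that cycle follows, with NQNs, addresses, and socket path copied from the log; the `run` helper only echoes each command, so the sketch stays runnable without a live SPDK target, and the exact digest/dhgroup lists are assumptions based on the combinations visible in this section (sha384/ffdhe8192, sha512/null).

```shell
# Sketch of the repeated authentication cycle seen in the trace above.
# 'run' only echoes each command; swap it for direct execution on a
# machine with an SPDK target and scripts/rpc.py available.
RPC="scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562"

run() { echo "$*"; }

for digest in sha384 sha512; do
  for dhgroup in ffdhe8192 null; do
    for keyid in 0 1 2 3; do
      # Restrict the host to one digest/dhgroup pair for this round.
      run $RPC bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Register the host NQN with the key under test, then attach.
      run $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid"
      run $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid"
      # (The log then checks nvmf_subsystem_get_qpairs on the target side:
      #  .auth.state == "completed", .auth.digest and .auth.dhgroup match.)
      run $RPC bdev_nvme_detach_controller nvme0
      run $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
    done
  done
done
```

Note that in the log the qpair verification (`nvmf_subsystem_get_qpairs`) goes through `rpc_cmd` to the target's RPC socket, not the host socket used above, so it is left as a comment here rather than a command.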
00:18:03.580 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:03.580 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.580 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.580 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.580 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.580 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:03.580 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.837 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.401 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.401 { 00:18:04.401 "cntlid": 91, 00:18:04.401 "qid": 0, 00:18:04.401 "state": "enabled", 00:18:04.401 "thread": "nvmf_tgt_poll_group_000", 00:18:04.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:04.401 "listen_address": { 00:18:04.401 "trtype": "TCP", 00:18:04.401 "adrfam": "IPv4", 00:18:04.401 "traddr": "10.0.0.2", 00:18:04.401 "trsvcid": "4420" 00:18:04.401 }, 00:18:04.401 "peer_address": { 00:18:04.401 "trtype": "TCP", 00:18:04.401 "adrfam": "IPv4", 00:18:04.401 "traddr": "10.0.0.1", 00:18:04.401 "trsvcid": "47096" 00:18:04.401 }, 00:18:04.401 "auth": { 00:18:04.401 "state": "completed", 00:18:04.401 "digest": "sha384", 00:18:04.401 "dhgroup": "ffdhe8192" 00:18:04.401 } 00:18:04.401 } 00:18:04.401 ]' 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.401 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.658 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.658 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.658 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:04.658 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.658 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.659 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:04.659 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:05.224 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.224 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.481 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.482 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.482 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.046 00:18:06.046 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.046 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.046 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.303 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.303 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.303 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.303 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.303 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.303 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.303 { 00:18:06.303 "cntlid": 93, 00:18:06.304 "qid": 0, 00:18:06.304 "state": "enabled", 00:18:06.304 "thread": "nvmf_tgt_poll_group_000", 00:18:06.304 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:06.304 "listen_address": { 00:18:06.304 "trtype": "TCP", 00:18:06.304 "adrfam": "IPv4", 00:18:06.304 "traddr": "10.0.0.2", 00:18:06.304 "trsvcid": "4420" 00:18:06.304 }, 00:18:06.304 "peer_address": { 00:18:06.304 "trtype": "TCP", 00:18:06.304 "adrfam": "IPv4", 00:18:06.304 "traddr": "10.0.0.1", 00:18:06.304 "trsvcid": "47116" 00:18:06.304 }, 00:18:06.304 "auth": { 00:18:06.304 "state": "completed", 00:18:06.304 "digest": "sha384", 00:18:06.304 "dhgroup": "ffdhe8192" 00:18:06.304 } 00:18:06.304 } 00:18:06.304 ]' 00:18:06.304 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.304 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.304 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.304 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.304 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.304 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.304 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.304 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.561 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:06.561 05:42:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:07.126 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.126 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:07.126 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.126 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.126 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.126 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.126 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:07.126 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.387 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.388 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.954 00:18:07.954 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:07.954 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.954 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.954 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.954 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.954 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.954 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.211 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.211 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.211 { 00:18:08.211 "cntlid": 95, 00:18:08.211 "qid": 0, 00:18:08.211 "state": "enabled", 00:18:08.211 "thread": "nvmf_tgt_poll_group_000", 00:18:08.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:08.211 "listen_address": { 00:18:08.211 "trtype": "TCP", 00:18:08.211 "adrfam": "IPv4", 00:18:08.211 "traddr": "10.0.0.2", 00:18:08.211 "trsvcid": "4420" 00:18:08.211 }, 00:18:08.211 "peer_address": { 00:18:08.212 "trtype": "TCP", 00:18:08.212 "adrfam": "IPv4", 00:18:08.212 "traddr": "10.0.0.1", 00:18:08.212 "trsvcid": "47142" 00:18:08.212 }, 00:18:08.212 "auth": { 00:18:08.212 "state": "completed", 00:18:08.212 "digest": "sha384", 00:18:08.212 "dhgroup": "ffdhe8192" 00:18:08.212 } 00:18:08.212 } 00:18:08.212 ]' 00:18:08.212 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.212 05:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.212 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.212 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.212 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.212 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.212 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.212 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.469 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:08.469 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:09.035 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.035 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:09.035 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.035 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.035 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.035 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:09.035 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.035 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.035 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:09.035 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.292 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.550 00:18:09.550 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.550 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.550 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.550 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.550 05:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.550 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.550 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.550 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.550 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.550 { 00:18:09.550 "cntlid": 97, 00:18:09.550 "qid": 0, 00:18:09.550 "state": "enabled", 00:18:09.550 "thread": "nvmf_tgt_poll_group_000", 00:18:09.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:09.550 "listen_address": { 00:18:09.550 "trtype": "TCP", 00:18:09.550 "adrfam": "IPv4", 00:18:09.550 "traddr": "10.0.0.2", 00:18:09.550 "trsvcid": "4420" 00:18:09.550 }, 00:18:09.550 "peer_address": { 00:18:09.550 "trtype": "TCP", 00:18:09.550 "adrfam": "IPv4", 00:18:09.550 "traddr": "10.0.0.1", 00:18:09.550 "trsvcid": "42942" 00:18:09.550 }, 00:18:09.550 "auth": { 00:18:09.550 "state": "completed", 00:18:09.550 "digest": "sha512", 00:18:09.550 "dhgroup": "null" 00:18:09.550 } 00:18:09.550 } 00:18:09.550 ]' 00:18:09.550 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.808 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.808 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.808 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:09.808 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.808 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.808 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.808 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.065 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:10.065 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:10.631 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.631 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:10.631 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.631 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.631 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.631 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.631 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:10.631 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.888 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.888 00:18:11.146 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.146 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.146 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.146 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.146 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.146 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.146 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.146 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.146 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.146 { 00:18:11.146 "cntlid": 99, 
00:18:11.146 "qid": 0, 00:18:11.146 "state": "enabled", 00:18:11.146 "thread": "nvmf_tgt_poll_group_000", 00:18:11.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:11.146 "listen_address": { 00:18:11.146 "trtype": "TCP", 00:18:11.146 "adrfam": "IPv4", 00:18:11.146 "traddr": "10.0.0.2", 00:18:11.146 "trsvcid": "4420" 00:18:11.146 }, 00:18:11.146 "peer_address": { 00:18:11.146 "trtype": "TCP", 00:18:11.146 "adrfam": "IPv4", 00:18:11.146 "traddr": "10.0.0.1", 00:18:11.146 "trsvcid": "42976" 00:18:11.146 }, 00:18:11.146 "auth": { 00:18:11.146 "state": "completed", 00:18:11.146 "digest": "sha512", 00:18:11.146 "dhgroup": "null" 00:18:11.146 } 00:18:11.146 } 00:18:11.146 ]' 00:18:11.146 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.146 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.403 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.403 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:11.403 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.403 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.403 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.403 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.661 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret 
DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:11.661 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:12.225 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.226 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:12.226 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.226 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.226 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.226 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.226 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:12.226 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:12.226 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:18:12.226 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.226 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.226 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:12.226 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.226 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.226 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.226 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.226 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.483 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.483 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.483 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.483 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.483 00:18:12.740 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.741 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.741 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.741 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.741 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.741 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.741 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.741 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.741 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.741 { 00:18:12.741 "cntlid": 101, 00:18:12.741 "qid": 0, 00:18:12.741 "state": "enabled", 00:18:12.741 "thread": "nvmf_tgt_poll_group_000", 00:18:12.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:12.741 "listen_address": { 00:18:12.741 "trtype": "TCP", 00:18:12.741 "adrfam": "IPv4", 00:18:12.741 "traddr": "10.0.0.2", 00:18:12.741 "trsvcid": "4420" 00:18:12.741 }, 00:18:12.741 "peer_address": { 00:18:12.741 "trtype": "TCP", 00:18:12.741 "adrfam": "IPv4", 00:18:12.741 "traddr": "10.0.0.1", 00:18:12.741 "trsvcid": "43004" 00:18:12.741 }, 00:18:12.741 "auth": { 00:18:12.741 "state": "completed", 00:18:12.741 "digest": "sha512", 00:18:12.741 "dhgroup": "null" 00:18:12.741 } 00:18:12.741 } 
00:18:12.741 ]' 00:18:12.741 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.998 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.998 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.998 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:12.998 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.998 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.998 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.998 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.255 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:13.256 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.821 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.821 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.078 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.078 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.078 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.078 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.078 00:18:14.078 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.078 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.078 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.336 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.336 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:14.336 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.336 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.336 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.336 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.336 { 00:18:14.336 "cntlid": 103, 00:18:14.336 "qid": 0, 00:18:14.336 "state": "enabled", 00:18:14.336 "thread": "nvmf_tgt_poll_group_000", 00:18:14.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:14.336 "listen_address": { 00:18:14.336 "trtype": "TCP", 00:18:14.336 "adrfam": "IPv4", 00:18:14.336 "traddr": "10.0.0.2", 00:18:14.336 "trsvcid": "4420" 00:18:14.336 }, 00:18:14.336 "peer_address": { 00:18:14.336 "trtype": "TCP", 00:18:14.336 "adrfam": "IPv4", 00:18:14.336 "traddr": "10.0.0.1", 00:18:14.336 "trsvcid": "43038" 00:18:14.336 }, 00:18:14.336 "auth": { 00:18:14.336 "state": "completed", 00:18:14.336 "digest": "sha512", 00:18:14.336 "dhgroup": "null" 00:18:14.336 } 00:18:14.336 } 00:18:14.336 ]' 00:18:14.336 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.336 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.336 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.594 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:14.594 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.594 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.594 05:43:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.594 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.852 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:14.852 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.418 05:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.418 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.675 00:18:15.675 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.676 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.676 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.933 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.933 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.933 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.933 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.933 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.933 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.933 { 00:18:15.933 "cntlid": 105, 00:18:15.933 "qid": 0, 00:18:15.933 "state": "enabled", 00:18:15.933 "thread": "nvmf_tgt_poll_group_000", 00:18:15.933 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:15.933 "listen_address": { 00:18:15.933 "trtype": "TCP", 00:18:15.933 "adrfam": "IPv4", 00:18:15.933 "traddr": "10.0.0.2", 00:18:15.933 "trsvcid": "4420" 00:18:15.933 }, 00:18:15.933 "peer_address": { 00:18:15.933 "trtype": "TCP", 00:18:15.933 "adrfam": "IPv4", 00:18:15.933 "traddr": "10.0.0.1", 00:18:15.933 "trsvcid": "43068" 00:18:15.933 }, 00:18:15.933 "auth": { 00:18:15.933 "state": "completed", 00:18:15.933 "digest": "sha512", 00:18:15.933 "dhgroup": "ffdhe2048" 00:18:15.933 } 00:18:15.933 } 00:18:15.933 ]' 00:18:15.933 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.933 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.933 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.191 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.191 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.191 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.191 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.191 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.449 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret 
DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:16.449 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:17.014 05:43:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.014 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.271 00:18:17.271 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.271 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.271 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.528 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.528 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.528 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.528 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.528 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.528 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.528 { 00:18:17.528 "cntlid": 107, 00:18:17.528 "qid": 0, 00:18:17.528 "state": "enabled", 00:18:17.528 "thread": "nvmf_tgt_poll_group_000", 00:18:17.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:17.528 "listen_address": { 00:18:17.528 "trtype": "TCP", 00:18:17.528 "adrfam": "IPv4", 00:18:17.528 "traddr": "10.0.0.2", 00:18:17.528 "trsvcid": "4420" 00:18:17.528 }, 00:18:17.528 "peer_address": { 00:18:17.528 "trtype": "TCP", 00:18:17.528 "adrfam": "IPv4", 00:18:17.528 "traddr": "10.0.0.1", 00:18:17.528 "trsvcid": "43106" 00:18:17.528 }, 00:18:17.528 "auth": { 00:18:17.528 "state": 
"completed", 00:18:17.528 "digest": "sha512", 00:18:17.528 "dhgroup": "ffdhe2048" 00:18:17.528 } 00:18:17.528 } 00:18:17.528 ]' 00:18:17.528 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.528 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.528 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.785 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.786 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.786 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.786 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.786 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.044 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:18.044 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:18.608 05:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.608 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.609 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.866 00:18:18.866 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.866 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.866 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.124 
05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.124 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.124 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.124 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.124 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.124 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.124 { 00:18:19.124 "cntlid": 109, 00:18:19.124 "qid": 0, 00:18:19.124 "state": "enabled", 00:18:19.124 "thread": "nvmf_tgt_poll_group_000", 00:18:19.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:19.124 "listen_address": { 00:18:19.124 "trtype": "TCP", 00:18:19.124 "adrfam": "IPv4", 00:18:19.124 "traddr": "10.0.0.2", 00:18:19.124 "trsvcid": "4420" 00:18:19.124 }, 00:18:19.124 "peer_address": { 00:18:19.124 "trtype": "TCP", 00:18:19.124 "adrfam": "IPv4", 00:18:19.124 "traddr": "10.0.0.1", 00:18:19.124 "trsvcid": "43132" 00:18:19.124 }, 00:18:19.124 "auth": { 00:18:19.124 "state": "completed", 00:18:19.124 "digest": "sha512", 00:18:19.124 "dhgroup": "ffdhe2048" 00:18:19.124 } 00:18:19.124 } 00:18:19.124 ]' 00:18:19.124 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.124 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.124 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.124 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.124 05:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.381 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.381 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.381 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.381 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:19.381 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:19.946 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.946 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:19.946 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.946 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.946 
05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.946 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.946 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:19.946 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.204 05:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.204 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.462 00:18:20.462 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.462 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.462 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.719 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.719 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.719 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.719 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.719 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.719 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.719 { 00:18:20.719 "cntlid": 111, 
00:18:20.719 "qid": 0, 00:18:20.719 "state": "enabled", 00:18:20.719 "thread": "nvmf_tgt_poll_group_000", 00:18:20.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:20.719 "listen_address": { 00:18:20.719 "trtype": "TCP", 00:18:20.719 "adrfam": "IPv4", 00:18:20.719 "traddr": "10.0.0.2", 00:18:20.719 "trsvcid": "4420" 00:18:20.719 }, 00:18:20.719 "peer_address": { 00:18:20.719 "trtype": "TCP", 00:18:20.719 "adrfam": "IPv4", 00:18:20.719 "traddr": "10.0.0.1", 00:18:20.719 "trsvcid": "34496" 00:18:20.719 }, 00:18:20.719 "auth": { 00:18:20.719 "state": "completed", 00:18:20.719 "digest": "sha512", 00:18:20.719 "dhgroup": "ffdhe2048" 00:18:20.719 } 00:18:20.719 } 00:18:20.719 ]' 00:18:20.719 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.719 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.719 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.719 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.720 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.720 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.720 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.720 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.978 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:20.978 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:21.542 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.542 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:21.542 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.542 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.542 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.542 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.542 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.542 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:21.542 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:21.799 05:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:21.799 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.799 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.799 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:21.799 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:21.800 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.800 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.800 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.800 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.800 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.800 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.800 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.800 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.057 00:18:22.057 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.057 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.057 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.314 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.314 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.314 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.314 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.314 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.314 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.314 { 00:18:22.314 "cntlid": 113, 00:18:22.314 "qid": 0, 00:18:22.314 "state": "enabled", 00:18:22.315 "thread": "nvmf_tgt_poll_group_000", 00:18:22.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:22.315 "listen_address": { 00:18:22.315 "trtype": "TCP", 00:18:22.315 "adrfam": "IPv4", 00:18:22.315 "traddr": "10.0.0.2", 00:18:22.315 "trsvcid": "4420" 00:18:22.315 }, 00:18:22.315 "peer_address": { 00:18:22.315 "trtype": "TCP", 00:18:22.315 "adrfam": "IPv4", 00:18:22.315 "traddr": "10.0.0.1", 00:18:22.315 "trsvcid": "34526" 00:18:22.315 }, 00:18:22.315 "auth": { 00:18:22.315 "state": 
"completed", 00:18:22.315 "digest": "sha512", 00:18:22.315 "dhgroup": "ffdhe3072" 00:18:22.315 } 00:18:22.315 } 00:18:22.315 ]' 00:18:22.315 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.315 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.315 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.315 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.315 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.315 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.315 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.315 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.573 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:22.573 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret 
DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:23.138 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.138 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:23.138 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.138 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.138 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.138 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.138 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:23.138 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.396 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.653 00:18:23.653 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.653 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.653 05:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.911 { 00:18:23.911 "cntlid": 115, 00:18:23.911 "qid": 0, 00:18:23.911 "state": "enabled", 00:18:23.911 "thread": "nvmf_tgt_poll_group_000", 00:18:23.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:23.911 "listen_address": { 00:18:23.911 "trtype": "TCP", 00:18:23.911 "adrfam": "IPv4", 00:18:23.911 "traddr": "10.0.0.2", 00:18:23.911 "trsvcid": "4420" 00:18:23.911 }, 00:18:23.911 "peer_address": { 00:18:23.911 "trtype": "TCP", 00:18:23.911 "adrfam": "IPv4", 00:18:23.911 "traddr": "10.0.0.1", 00:18:23.911 "trsvcid": "34534" 00:18:23.911 }, 00:18:23.911 "auth": { 00:18:23.911 "state": "completed", 00:18:23.911 "digest": "sha512", 00:18:23.911 "dhgroup": "ffdhe3072" 00:18:23.911 } 00:18:23.911 } 00:18:23.911 ]' 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.911 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.168 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:24.168 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:24.732 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.733 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:24.733 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.733 05:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.733 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.733 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.733 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:24.733 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.990 05:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.990 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.248 00:18:25.248 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.248 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.248 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.506 { 00:18:25.506 "cntlid": 117, 00:18:25.506 "qid": 0, 00:18:25.506 "state": "enabled", 00:18:25.506 "thread": "nvmf_tgt_poll_group_000", 00:18:25.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:25.506 "listen_address": { 00:18:25.506 "trtype": "TCP", 00:18:25.506 "adrfam": "IPv4", 00:18:25.506 "traddr": "10.0.0.2", 00:18:25.506 "trsvcid": "4420" 00:18:25.506 }, 00:18:25.506 "peer_address": { 00:18:25.506 "trtype": "TCP", 00:18:25.506 "adrfam": "IPv4", 00:18:25.506 "traddr": "10.0.0.1", 00:18:25.506 "trsvcid": "34554" 00:18:25.506 }, 00:18:25.506 "auth": { 00:18:25.506 "state": "completed", 00:18:25.506 "digest": "sha512", 00:18:25.506 "dhgroup": "ffdhe3072" 00:18:25.506 } 00:18:25.506 } 00:18:25.506 ]' 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.506 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:18:25.764 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:25.764 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:26.329 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.329 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:26.330 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.330 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.330 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.330 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.330 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:26.330 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.587 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.845 00:18:26.845 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.845 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.845 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.845 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.845 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.845 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.845 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.103 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.103 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.103 { 00:18:27.103 "cntlid": 119, 00:18:27.103 "qid": 0, 00:18:27.103 "state": "enabled", 00:18:27.103 "thread": "nvmf_tgt_poll_group_000", 00:18:27.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:27.103 "listen_address": { 00:18:27.103 "trtype": "TCP", 00:18:27.103 "adrfam": "IPv4", 00:18:27.103 "traddr": "10.0.0.2", 00:18:27.103 "trsvcid": "4420" 00:18:27.103 }, 00:18:27.103 "peer_address": { 00:18:27.103 "trtype": "TCP", 00:18:27.103 "adrfam": "IPv4", 00:18:27.103 "traddr": "10.0.0.1", 00:18:27.103 "trsvcid": "34568" 00:18:27.103 }, 00:18:27.103 "auth": { 00:18:27.103 
"state": "completed", 00:18:27.103 "digest": "sha512", 00:18:27.103 "dhgroup": "ffdhe3072" 00:18:27.103 } 00:18:27.103 } 00:18:27.103 ]' 00:18:27.103 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.103 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.103 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.103 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:27.103 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.103 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.103 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.103 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.362 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:27.362 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:27.928 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.928 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.928 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:27.928 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.928 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.928 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.928 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.928 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.928 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:27.928 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.185 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.443 00:18:28.443 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.443 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.443 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.701 
05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.701 { 00:18:28.701 "cntlid": 121, 00:18:28.701 "qid": 0, 00:18:28.701 "state": "enabled", 00:18:28.701 "thread": "nvmf_tgt_poll_group_000", 00:18:28.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:28.701 "listen_address": { 00:18:28.701 "trtype": "TCP", 00:18:28.701 "adrfam": "IPv4", 00:18:28.701 "traddr": "10.0.0.2", 00:18:28.701 "trsvcid": "4420" 00:18:28.701 }, 00:18:28.701 "peer_address": { 00:18:28.701 "trtype": "TCP", 00:18:28.701 "adrfam": "IPv4", 00:18:28.701 "traddr": "10.0.0.1", 00:18:28.701 "trsvcid": "34600" 00:18:28.701 }, 00:18:28.701 "auth": { 00:18:28.701 "state": "completed", 00:18:28.701 "digest": "sha512", 00:18:28.701 "dhgroup": "ffdhe4096" 00:18:28.701 } 00:18:28.701 } 00:18:28.701 ]' 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:28.701 05:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.701 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.959 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:28.959 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:29.523 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.523 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:29.523 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.523 05:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.523 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.523 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.523 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:29.523 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.781 05:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.781 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.039 00:18:30.039 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.039 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.039 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.296 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.296 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.296 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.296 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.296 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.296 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.296 { 00:18:30.296 "cntlid": 123, 00:18:30.296 "qid": 0, 00:18:30.296 "state": "enabled", 00:18:30.296 "thread": "nvmf_tgt_poll_group_000", 00:18:30.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:30.296 "listen_address": { 00:18:30.296 "trtype": "TCP", 00:18:30.296 "adrfam": "IPv4", 00:18:30.296 "traddr": "10.0.0.2", 00:18:30.296 "trsvcid": "4420" 00:18:30.296 }, 00:18:30.296 "peer_address": { 00:18:30.296 "trtype": "TCP", 00:18:30.296 "adrfam": "IPv4", 00:18:30.296 "traddr": "10.0.0.1", 00:18:30.296 "trsvcid": "58244" 00:18:30.296 }, 00:18:30.296 "auth": { 00:18:30.296 "state": "completed", 00:18:30.296 "digest": "sha512", 00:18:30.296 "dhgroup": "ffdhe4096" 00:18:30.296 } 00:18:30.296 } 00:18:30.296 ]' 00:18:30.296 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.296 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.296 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.296 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:30.296 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.296 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.296 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.296 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:18:30.553 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:30.553 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:31.118 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.118 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:31.118 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.118 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.118 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.118 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.118 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:31.118 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.376 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.633 00:18:31.633 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.633 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.633 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.891 { 00:18:31.891 "cntlid": 125, 00:18:31.891 "qid": 0, 00:18:31.891 "state": "enabled", 00:18:31.891 "thread": "nvmf_tgt_poll_group_000", 00:18:31.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:31.891 "listen_address": { 00:18:31.891 "trtype": "TCP", 00:18:31.891 "adrfam": "IPv4", 00:18:31.891 "traddr": "10.0.0.2", 00:18:31.891 "trsvcid": "4420" 00:18:31.891 }, 00:18:31.891 "peer_address": { 00:18:31.891 "trtype": "TCP", 00:18:31.891 "adrfam": "IPv4", 
00:18:31.891 "traddr": "10.0.0.1", 00:18:31.891 "trsvcid": "58280" 00:18:31.891 }, 00:18:31.891 "auth": { 00:18:31.891 "state": "completed", 00:18:31.891 "digest": "sha512", 00:18:31.891 "dhgroup": "ffdhe4096" 00:18:31.891 } 00:18:31.891 } 00:18:31.891 ]' 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.891 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.149 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:32.149 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:32.714 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.714 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:32.714 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.714 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.714 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.714 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.714 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:32.714 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:32.971 05:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.971 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.229 00:18:33.229 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.229 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.229 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.487 { 00:18:33.487 "cntlid": 127, 00:18:33.487 "qid": 0, 00:18:33.487 "state": "enabled", 00:18:33.487 "thread": "nvmf_tgt_poll_group_000", 00:18:33.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:33.487 "listen_address": { 00:18:33.487 "trtype": "TCP", 00:18:33.487 "adrfam": "IPv4", 00:18:33.487 "traddr": "10.0.0.2", 00:18:33.487 "trsvcid": "4420" 00:18:33.487 }, 00:18:33.487 "peer_address": { 00:18:33.487 "trtype": "TCP", 00:18:33.487 "adrfam": "IPv4", 00:18:33.487 "traddr": "10.0.0.1", 00:18:33.487 "trsvcid": "58304" 00:18:33.487 }, 00:18:33.487 "auth": { 00:18:33.487 "state": "completed", 00:18:33.487 "digest": "sha512", 00:18:33.487 "dhgroup": "ffdhe4096" 00:18:33.487 } 00:18:33.487 } 00:18:33.487 ]' 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.487 05:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.487 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.745 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:33.745 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:34.310 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.310 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:34.310 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.310 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:34.310 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.310 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.310 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.310 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:34.310 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.567 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.825 00:18:34.825 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.825 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.825 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.083 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.083 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.083 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.083 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.083 05:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.083 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.083 { 00:18:35.083 "cntlid": 129, 00:18:35.083 "qid": 0, 00:18:35.083 "state": "enabled", 00:18:35.083 "thread": "nvmf_tgt_poll_group_000", 00:18:35.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:35.083 "listen_address": { 00:18:35.083 "trtype": "TCP", 00:18:35.083 "adrfam": "IPv4", 00:18:35.083 "traddr": "10.0.0.2", 00:18:35.083 "trsvcid": "4420" 00:18:35.083 }, 00:18:35.083 "peer_address": { 00:18:35.083 "trtype": "TCP", 00:18:35.083 "adrfam": "IPv4", 00:18:35.083 "traddr": "10.0.0.1", 00:18:35.083 "trsvcid": "58326" 00:18:35.083 }, 00:18:35.083 "auth": { 00:18:35.083 "state": "completed", 00:18:35.083 "digest": "sha512", 00:18:35.083 "dhgroup": "ffdhe6144" 00:18:35.083 } 00:18:35.083 } 00:18:35.083 ]' 00:18:35.083 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.083 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.083 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.341 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:35.341 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.341 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.341 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.341 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.599 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:35.599 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:36.165 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.165 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:36.165 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.165 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.165 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.165 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.165 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:36.165 05:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.165 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.730 00:18:36.730 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.730 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.730 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.730 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.730 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.730 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.730 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.730 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.730 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.730 { 00:18:36.730 "cntlid": 131, 00:18:36.730 "qid": 0, 00:18:36.730 "state": "enabled", 00:18:36.730 "thread": "nvmf_tgt_poll_group_000", 00:18:36.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:36.730 "listen_address": { 00:18:36.730 "trtype": "TCP", 00:18:36.730 "adrfam": "IPv4", 00:18:36.730 "traddr": "10.0.0.2", 00:18:36.730 
"trsvcid": "4420" 00:18:36.730 }, 00:18:36.730 "peer_address": { 00:18:36.730 "trtype": "TCP", 00:18:36.730 "adrfam": "IPv4", 00:18:36.730 "traddr": "10.0.0.1", 00:18:36.730 "trsvcid": "58364" 00:18:36.730 }, 00:18:36.730 "auth": { 00:18:36.730 "state": "completed", 00:18:36.730 "digest": "sha512", 00:18:36.730 "dhgroup": "ffdhe6144" 00:18:36.730 } 00:18:36.730 } 00:18:36.730 ]' 00:18:36.730 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.988 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.988 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.988 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:36.988 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.988 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.988 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.988 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.245 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:37.245 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.810 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.376 00:18:38.376 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.376 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:38.376 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.376 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.376 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.376 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.376 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.376 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.376 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.376 { 00:18:38.376 "cntlid": 133, 00:18:38.376 "qid": 0, 00:18:38.376 "state": "enabled", 00:18:38.376 "thread": "nvmf_tgt_poll_group_000", 00:18:38.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:38.376 "listen_address": { 00:18:38.376 "trtype": "TCP", 00:18:38.376 "adrfam": "IPv4", 00:18:38.376 "traddr": "10.0.0.2", 00:18:38.376 "trsvcid": "4420" 00:18:38.376 }, 00:18:38.376 "peer_address": { 00:18:38.376 "trtype": "TCP", 00:18:38.376 "adrfam": "IPv4", 00:18:38.376 "traddr": "10.0.0.1", 00:18:38.376 "trsvcid": "58390" 00:18:38.376 }, 00:18:38.376 "auth": { 00:18:38.376 "state": "completed", 00:18:38.376 "digest": "sha512", 00:18:38.376 "dhgroup": "ffdhe6144" 00:18:38.376 } 00:18:38.376 } 00:18:38.376 ]' 00:18:38.376 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.633 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.633 05:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.633 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.633 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.633 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.633 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.633 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.891 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:38.891 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.456 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.714 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.714 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:39.714 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.714 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.971 00:18:39.971 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.971 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.971 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.229 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.229 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.229 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.229 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.229 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.229 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.229 { 00:18:40.229 "cntlid": 135, 00:18:40.229 "qid": 0, 00:18:40.229 "state": "enabled", 00:18:40.229 "thread": "nvmf_tgt_poll_group_000", 00:18:40.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:40.229 "listen_address": { 00:18:40.229 "trtype": "TCP", 00:18:40.229 "adrfam": "IPv4", 00:18:40.229 "traddr": "10.0.0.2", 00:18:40.229 "trsvcid": "4420" 00:18:40.229 }, 00:18:40.229 "peer_address": { 00:18:40.229 "trtype": "TCP", 00:18:40.229 "adrfam": "IPv4", 00:18:40.229 "traddr": "10.0.0.1", 00:18:40.229 "trsvcid": "52844" 00:18:40.229 }, 00:18:40.229 "auth": { 00:18:40.229 "state": "completed", 00:18:40.229 "digest": "sha512", 00:18:40.229 "dhgroup": "ffdhe6144" 00:18:40.229 } 00:18:40.229 } 00:18:40.229 ]' 00:18:40.229 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.229 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.229 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.229 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.229 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.229 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.229 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.229 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.487 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:40.487 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:41.052 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.052 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:41.052 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.052 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.052 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.052 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.052 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.052 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:41.052 05:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.310 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.875 00:18:41.875 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.875 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.875 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.875 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.875 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.875 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.875 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.875 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.875 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.875 { 00:18:41.875 "cntlid": 137, 00:18:41.875 "qid": 0, 00:18:41.875 "state": "enabled", 00:18:41.875 "thread": "nvmf_tgt_poll_group_000", 00:18:41.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:41.875 "listen_address": { 00:18:41.875 "trtype": "TCP", 00:18:41.875 "adrfam": "IPv4", 00:18:41.875 "traddr": "10.0.0.2", 00:18:41.875 
"trsvcid": "4420" 00:18:41.875 }, 00:18:41.875 "peer_address": { 00:18:41.875 "trtype": "TCP", 00:18:41.875 "adrfam": "IPv4", 00:18:41.875 "traddr": "10.0.0.1", 00:18:41.875 "trsvcid": "52870" 00:18:41.875 }, 00:18:41.875 "auth": { 00:18:41.875 "state": "completed", 00:18:41.875 "digest": "sha512", 00:18:41.875 "dhgroup": "ffdhe8192" 00:18:41.875 } 00:18:41.875 } 00:18:41.875 ]' 00:18:41.875 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.875 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.134 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.134 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.134 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.134 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.134 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.134 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.391 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:42.391 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.956 05:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.956 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.521 00:18:43.521 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.521 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.521 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.780 { 00:18:43.780 "cntlid": 139, 00:18:43.780 "qid": 0, 00:18:43.780 "state": "enabled", 00:18:43.780 "thread": "nvmf_tgt_poll_group_000", 00:18:43.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:43.780 "listen_address": { 00:18:43.780 "trtype": "TCP", 00:18:43.780 "adrfam": "IPv4", 00:18:43.780 "traddr": "10.0.0.2", 00:18:43.780 "trsvcid": "4420" 00:18:43.780 }, 00:18:43.780 "peer_address": { 00:18:43.780 "trtype": "TCP", 00:18:43.780 "adrfam": "IPv4", 00:18:43.780 "traddr": "10.0.0.1", 00:18:43.780 "trsvcid": "52896" 00:18:43.780 }, 00:18:43.780 "auth": { 00:18:43.780 "state": "completed", 00:18:43.780 "digest": "sha512", 00:18:43.780 "dhgroup": "ffdhe8192" 00:18:43.780 } 00:18:43.780 } 00:18:43.780 ]' 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.780 05:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.780 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.037 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:44.038 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: --dhchap-ctrl-secret DHHC-1:02:MDAyNGU3MzkzNDE4ZTYwYTYyYTAzOWZlZWM3MGQwZjI1NjA1MjBhYmQyMWM2NDc2XNON1A==: 00:18:44.602 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.602 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:44.602 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.602 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.602 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.602 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.602 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:44.602 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.860 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.424 00:18:45.424 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.424 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.424 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.424 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.424 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.424 05:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.424 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.681 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.681 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.681 { 00:18:45.681 "cntlid": 141, 00:18:45.681 "qid": 0, 00:18:45.681 "state": "enabled", 00:18:45.681 "thread": "nvmf_tgt_poll_group_000", 00:18:45.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:45.681 "listen_address": { 00:18:45.681 "trtype": "TCP", 00:18:45.681 "adrfam": "IPv4", 00:18:45.681 "traddr": "10.0.0.2", 00:18:45.681 "trsvcid": "4420" 00:18:45.681 }, 00:18:45.681 "peer_address": { 00:18:45.681 "trtype": "TCP", 00:18:45.681 "adrfam": "IPv4", 00:18:45.681 "traddr": "10.0.0.1", 00:18:45.681 "trsvcid": "52944" 00:18:45.681 }, 00:18:45.681 "auth": { 00:18:45.681 "state": "completed", 00:18:45.681 "digest": "sha512", 00:18:45.681 "dhgroup": "ffdhe8192" 00:18:45.681 } 00:18:45.681 } 00:18:45.681 ]' 00:18:45.681 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.681 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.681 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.681 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.681 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.681 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.681 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.681 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.939 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:45.939 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:01:MWI2MDViODg3Y2Y1ZjRhMjE1MzUwMWFmMTdjMDJkN2Nqo24m: 00:18:46.504 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.504 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:46.504 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.504 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.504 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.504 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.504 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:46.504 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:46.762 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.020 00:18:47.020 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.020 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.020 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.277 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.277 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.277 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.277 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.277 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.278 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.278 { 00:18:47.278 "cntlid": 143, 00:18:47.278 "qid": 0, 00:18:47.278 "state": "enabled", 00:18:47.278 "thread": "nvmf_tgt_poll_group_000", 00:18:47.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:47.278 "listen_address": { 00:18:47.278 "trtype": "TCP", 00:18:47.278 "adrfam": 
"IPv4", 00:18:47.278 "traddr": "10.0.0.2", 00:18:47.278 "trsvcid": "4420" 00:18:47.278 }, 00:18:47.278 "peer_address": { 00:18:47.278 "trtype": "TCP", 00:18:47.278 "adrfam": "IPv4", 00:18:47.278 "traddr": "10.0.0.1", 00:18:47.278 "trsvcid": "52974" 00:18:47.278 }, 00:18:47.278 "auth": { 00:18:47.278 "state": "completed", 00:18:47.278 "digest": "sha512", 00:18:47.278 "dhgroup": "ffdhe8192" 00:18:47.278 } 00:18:47.278 } 00:18:47.278 ]' 00:18:47.278 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.278 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.278 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.535 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.535 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.535 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.535 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.535 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.792 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:47.792 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:48.357 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:48.357 05:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.357 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.923 00:18:48.923 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.923 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.923 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.180 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.180 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.180 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.180 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.180 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.180 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.180 { 00:18:49.180 "cntlid": 145, 00:18:49.180 "qid": 0, 00:18:49.180 "state": "enabled", 00:18:49.180 "thread": "nvmf_tgt_poll_group_000", 00:18:49.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:49.180 "listen_address": { 00:18:49.180 "trtype": "TCP", 00:18:49.180 "adrfam": "IPv4", 00:18:49.180 "traddr": "10.0.0.2", 00:18:49.180 "trsvcid": "4420" 00:18:49.180 }, 00:18:49.180 "peer_address": { 00:18:49.180 "trtype": "TCP", 00:18:49.180 "adrfam": "IPv4", 00:18:49.180 "traddr": "10.0.0.1", 00:18:49.180 "trsvcid": "53006" 00:18:49.180 }, 00:18:49.180 "auth": { 00:18:49.180 "state": 
"completed", 00:18:49.180 "digest": "sha512", 00:18:49.180 "dhgroup": "ffdhe8192" 00:18:49.180 } 00:18:49.180 } 00:18:49.180 ]' 00:18:49.180 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.180 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.180 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.180 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:49.180 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.180 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.180 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.180 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.438 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:49.439 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGE0ODdhYjVmNDY5YmUwZGUzZjY3MWRlZTI1ZjA2MDEzZjY2YTJjMTY0NDQ3YTY3oYgFag==: --dhchap-ctrl-secret 
DHHC-1:03:NGVlMmIyOWI2ZDQwNjQyZDdmYzhmYTRiYTg3NDhlMWJjM2YzMWIxYWYyZGFmZGEzMTBhMmNkMzQzNzc4YjdkNyWVtko=: 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:50.004 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:50.570 request: 00:18:50.570 { 00:18:50.570 "name": "nvme0", 00:18:50.570 "trtype": "tcp", 00:18:50.570 "traddr": "10.0.0.2", 00:18:50.570 "adrfam": "ipv4", 00:18:50.570 "trsvcid": "4420", 00:18:50.570 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:50.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:50.570 "prchk_reftag": false, 00:18:50.570 "prchk_guard": false, 00:18:50.570 "hdgst": false, 00:18:50.570 "ddgst": false, 00:18:50.570 "dhchap_key": "key2", 00:18:50.570 "allow_unrecognized_csi": false, 00:18:50.570 "method": "bdev_nvme_attach_controller", 00:18:50.570 "req_id": 1 00:18:50.570 } 00:18:50.570 Got JSON-RPC error response 00:18:50.570 response: 00:18:50.570 { 00:18:50.570 "code": -5, 00:18:50.570 "message": 
"Input/output error" 00:18:50.570 } 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:50.570 05:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:50.570 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.135 request: 00:18:51.135 { 00:18:51.135 "name": "nvme0", 00:18:51.135 "trtype": "tcp", 00:18:51.135 "traddr": "10.0.0.2", 00:18:51.135 "adrfam": "ipv4", 00:18:51.135 "trsvcid": "4420", 00:18:51.135 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:51.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:51.135 "prchk_reftag": false, 00:18:51.135 "prchk_guard": false, 00:18:51.135 "hdgst": 
false, 00:18:51.135 "ddgst": false, 00:18:51.135 "dhchap_key": "key1", 00:18:51.135 "dhchap_ctrlr_key": "ckey2", 00:18:51.135 "allow_unrecognized_csi": false, 00:18:51.135 "method": "bdev_nvme_attach_controller", 00:18:51.135 "req_id": 1 00:18:51.135 } 00:18:51.135 Got JSON-RPC error response 00:18:51.135 response: 00:18:51.135 { 00:18:51.135 "code": -5, 00:18:51.135 "message": "Input/output error" 00:18:51.135 } 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.135 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.393 request: 00:18:51.393 { 00:18:51.393 "name": "nvme0", 00:18:51.393 "trtype": 
"tcp", 00:18:51.393 "traddr": "10.0.0.2", 00:18:51.393 "adrfam": "ipv4", 00:18:51.393 "trsvcid": "4420", 00:18:51.393 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:51.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:51.393 "prchk_reftag": false, 00:18:51.393 "prchk_guard": false, 00:18:51.393 "hdgst": false, 00:18:51.393 "ddgst": false, 00:18:51.393 "dhchap_key": "key1", 00:18:51.393 "dhchap_ctrlr_key": "ckey1", 00:18:51.393 "allow_unrecognized_csi": false, 00:18:51.393 "method": "bdev_nvme_attach_controller", 00:18:51.393 "req_id": 1 00:18:51.394 } 00:18:51.394 Got JSON-RPC error response 00:18:51.394 response: 00:18:51.394 { 00:18:51.394 "code": -5, 00:18:51.394 "message": "Input/output error" 00:18:51.394 } 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1172380 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1172380 ']' 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1172380 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1172380 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1172380' 00:18:51.394 killing process with pid 1172380 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1172380 00:18:51.394 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1172380 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1194230 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1194230 00:18:51.652 05:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1194230 ']' 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.652 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1194230 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1194230 ']' 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.910 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.169 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.169 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:52.169 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:52.169 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.169 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.169 null0 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.A3M 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.UQF ]] 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UQF 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.sMg 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.JcF ]] 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JcF 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.MXA 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.CzX ]] 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CzX 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.169 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.YrL 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.427 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.994 nvme0n1 00:18:52.994 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.994 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.994 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.251 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.251 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.251 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.251 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.251 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.251 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.251 { 00:18:53.251 "cntlid": 1, 00:18:53.251 "qid": 0, 00:18:53.251 "state": "enabled", 00:18:53.251 "thread": "nvmf_tgt_poll_group_000", 00:18:53.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:53.251 "listen_address": { 00:18:53.251 "trtype": "TCP", 00:18:53.251 "adrfam": "IPv4", 00:18:53.251 "traddr": "10.0.0.2", 00:18:53.251 "trsvcid": "4420" 00:18:53.251 }, 00:18:53.251 "peer_address": { 00:18:53.251 "trtype": "TCP", 00:18:53.251 "adrfam": "IPv4", 00:18:53.251 "traddr": 
"10.0.0.1", 00:18:53.251 "trsvcid": "33000" 00:18:53.251 }, 00:18:53.251 "auth": { 00:18:53.251 "state": "completed", 00:18:53.251 "digest": "sha512", 00:18:53.251 "dhgroup": "ffdhe8192" 00:18:53.251 } 00:18:53.251 } 00:18:53.251 ]' 00:18:53.251 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.251 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.251 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.251 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:53.251 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.509 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.509 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.509 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.510 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:53.510 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:54.075 05:43:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.075 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:54.075 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.075 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.075 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.075 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:18:54.075 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.075 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.075 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.075 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:54.075 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:54.333 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:54.333 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:54.333 05:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:54.333 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:54.333 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.333 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:54.333 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.333 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.333 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.333 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.591 request: 00:18:54.591 { 00:18:54.591 "name": "nvme0", 00:18:54.591 "trtype": "tcp", 00:18:54.591 "traddr": "10.0.0.2", 00:18:54.591 "adrfam": "ipv4", 00:18:54.591 "trsvcid": "4420", 00:18:54.591 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:54.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:54.592 "prchk_reftag": false, 00:18:54.592 "prchk_guard": false, 00:18:54.592 "hdgst": false, 00:18:54.592 "ddgst": false, 00:18:54.592 "dhchap_key": "key3", 00:18:54.592 
"allow_unrecognized_csi": false, 00:18:54.592 "method": "bdev_nvme_attach_controller", 00:18:54.592 "req_id": 1 00:18:54.592 } 00:18:54.592 Got JSON-RPC error response 00:18:54.592 response: 00:18:54.592 { 00:18:54.592 "code": -5, 00:18:54.592 "message": "Input/output error" 00:18:54.592 } 00:18:54.592 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:54.592 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:54.592 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:54.592 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:54.592 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:54.592 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:54.592 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:54.592 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:54.849 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:54.849 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:54.849 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:54.849 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:54.849 05:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.850 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:54.850 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.850 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.850 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.850 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.850 request: 00:18:54.850 { 00:18:54.850 "name": "nvme0", 00:18:54.850 "trtype": "tcp", 00:18:54.850 "traddr": "10.0.0.2", 00:18:54.850 "adrfam": "ipv4", 00:18:54.850 "trsvcid": "4420", 00:18:54.850 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:54.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:54.850 "prchk_reftag": false, 00:18:54.850 "prchk_guard": false, 00:18:54.850 "hdgst": false, 00:18:54.850 "ddgst": false, 00:18:54.850 "dhchap_key": "key3", 00:18:54.850 "allow_unrecognized_csi": false, 00:18:54.850 "method": "bdev_nvme_attach_controller", 00:18:54.850 "req_id": 1 00:18:54.850 } 00:18:54.850 Got JSON-RPC error response 00:18:54.850 response: 00:18:54.850 { 00:18:54.850 "code": -5, 00:18:54.850 "message": "Input/output error" 00:18:54.850 } 00:18:55.108 
05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.108 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.673 request: 00:18:55.673 { 00:18:55.673 "name": "nvme0", 00:18:55.673 "trtype": "tcp", 00:18:55.673 "traddr": "10.0.0.2", 00:18:55.673 "adrfam": "ipv4", 00:18:55.673 "trsvcid": "4420", 00:18:55.673 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:55.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:55.673 "prchk_reftag": false, 00:18:55.673 "prchk_guard": false, 00:18:55.673 "hdgst": false, 00:18:55.673 "ddgst": false, 00:18:55.673 "dhchap_key": "key0", 00:18:55.673 "dhchap_ctrlr_key": "key1", 00:18:55.673 "allow_unrecognized_csi": false, 00:18:55.673 "method": "bdev_nvme_attach_controller", 00:18:55.673 "req_id": 1 00:18:55.673 } 00:18:55.673 Got JSON-RPC error response 00:18:55.673 response: 00:18:55.673 { 00:18:55.673 "code": -5, 00:18:55.673 "message": "Input/output error" 00:18:55.673 } 00:18:55.673 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:55.673 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.673 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.673 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.673 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:55.673 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:55.673 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:55.673 nvme0n1 00:18:55.931 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:55.931 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:55.931 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.931 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.931 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.931 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.188 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:18:56.188 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.188 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:56.188 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.188 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:56.188 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:56.188 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:57.121 nvme0n1 00:18:57.121 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:57.121 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:57.121 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.121 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.121 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:57.121 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.121 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.121 
05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.121 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:57.121 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:57.121 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.382 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.382 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:57.382 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: --dhchap-ctrl-secret DHHC-1:03:NWY0ZDdmZGZhNzFhMTI2OTcyMzJjMDlkMjRmYTEwM2E4ODFiNWM4MzAyODQ3YmNhMDU0MjRiYWVlMGUxNTcxMdab2oA=: 00:18:57.947 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:57.947 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:57.947 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:57.947 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:57.947 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:57.947 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:57.947 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:57.948 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.948 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.205 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:58.205 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:58.205 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:58.205 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:58.205 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.205 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:58.205 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.205 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:58.205 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:58.205 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:58.463 request: 00:18:58.463 { 00:18:58.463 "name": "nvme0", 00:18:58.463 "trtype": "tcp", 00:18:58.463 "traddr": "10.0.0.2", 00:18:58.463 "adrfam": "ipv4", 00:18:58.463 "trsvcid": "4420", 00:18:58.463 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:58.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:18:58.463 "prchk_reftag": false, 00:18:58.463 "prchk_guard": false, 00:18:58.463 "hdgst": false, 00:18:58.463 "ddgst": false, 00:18:58.463 "dhchap_key": "key1", 00:18:58.463 "allow_unrecognized_csi": false, 00:18:58.463 "method": "bdev_nvme_attach_controller", 00:18:58.463 "req_id": 1 00:18:58.463 } 00:18:58.463 Got JSON-RPC error response 00:18:58.463 response: 00:18:58.463 { 00:18:58.463 "code": -5, 00:18:58.463 "message": "Input/output error" 00:18:58.463 } 00:18:58.463 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:58.463 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.463 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.463 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.463 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:58.463 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:58.463 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:59.455 nvme0n1 00:18:59.455 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:59.455 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:59.455 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.455 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.455 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.455 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.772 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:59.772 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.772 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.772 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.772 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:59.772 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:59.772 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:00.093 nvme0n1 00:19:00.093 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:00.093 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.093 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:00.093 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.093 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.093 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: '' 2s 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: ]] 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YzE2YmQ1ZjExMzBlMTI2NGEzYTg1NDc3OTFiN2FhODK9t27Z: 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:00.365 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:02.891 
05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: 2s 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:02.891 05:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: ]] 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OTUyNDg2ZjJlMTQzZWJmNjNkMWRlZjU3YmZmYWJjMDg2ZGQ0YTdlMzNlM2ZlODM4XO7puQ==: 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:02.891 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.789 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:04.790 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:04.790 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:05.355 nvme0n1 00:19:05.355 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:19:05.355 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.355 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.355 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.355 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:05.355 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:05.612 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:05.612 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:05.612 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.870 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.870 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:05.870 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.870 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.870 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.870 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:05.870 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:06.127 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:06.127 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:06.127 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:06.385 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:06.950 request: 00:19:06.951 { 00:19:06.951 "name": "nvme0", 00:19:06.951 "dhchap_key": "key1", 00:19:06.951 "dhchap_ctrlr_key": "key3", 00:19:06.951 "method": "bdev_nvme_set_keys", 00:19:06.951 "req_id": 1 00:19:06.951 } 00:19:06.951 Got JSON-RPC error response 00:19:06.951 response: 00:19:06.951 { 00:19:06.951 "code": -13, 00:19:06.951 "message": "Permission denied" 00:19:06.951 } 00:19:06.951 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:06.951 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.951 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.951 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.951 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:06.951 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:06.951 05:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.951 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:06.951 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:08.323 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:08.323 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:08.323 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.323 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:08.323 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:08.323 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.323 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.323 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:08.323 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:08.323 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:08.890 nvme0n1 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.890 05:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:08.890 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:09.457 request: 00:19:09.457 { 00:19:09.457 "name": "nvme0", 00:19:09.457 "dhchap_key": "key2", 00:19:09.457 "dhchap_ctrlr_key": "key0", 00:19:09.457 "method": "bdev_nvme_set_keys", 00:19:09.457 "req_id": 1 00:19:09.457 } 00:19:09.457 Got JSON-RPC error response 00:19:09.457 response: 00:19:09.457 { 00:19:09.457 "code": -13, 00:19:09.457 "message": "Permission denied" 00:19:09.457 } 00:19:09.457 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:09.457 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.457 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.457 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.457 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:09.457 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:09.457 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.715 05:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:09.715 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:10.648 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:10.648 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:10.648 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1172541 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1172541 ']' 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1172541 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1172541 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1172541' 00:19:10.906 killing process with pid 1172541 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1172541 00:19:10.906 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1172541 00:19:11.164 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:11.164 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.164 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:11.164 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.164 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:11.164 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.164 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.164 rmmod nvme_tcp 00:19:11.164 rmmod nvme_fabrics 00:19:11.164 rmmod nvme_keyring 00:19:11.164 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.164 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:11.164 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:11.164 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1194230 ']' 00:19:11.164 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1194230 00:19:11.164 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1194230 ']' 00:19:11.164 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1194230 
00:19:11.164 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:11.164 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.164 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194230 00:19:11.422 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.422 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.422 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194230' 00:19:11.422 killing process with pid 1194230 00:19:11.422 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1194230 00:19:11.422 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1194230 00:19:11.422 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:11.422 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:11.422 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:11.423 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:11.423 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:11.423 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:11.423 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:11.423 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.423 05:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:11.423 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.423 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.423 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.A3M /tmp/spdk.key-sha256.sMg /tmp/spdk.key-sha384.MXA /tmp/spdk.key-sha512.YrL /tmp/spdk.key-sha512.UQF /tmp/spdk.key-sha384.JcF /tmp/spdk.key-sha256.CzX '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:13.957 00:19:13.957 real 2m31.772s 00:19:13.957 user 5m49.901s 00:19:13.957 sys 0m24.419s 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.957 ************************************ 00:19:13.957 END TEST nvmf_auth_target 00:19:13.957 ************************************ 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.957 ************************************ 00:19:13.957 START TEST nvmf_bdevio_no_huge 00:19:13.957 ************************************ 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:13.957 * Looking for test storage... 00:19:13.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:13.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.957 --rc genhtml_branch_coverage=1 00:19:13.957 --rc genhtml_function_coverage=1 00:19:13.957 --rc genhtml_legend=1 00:19:13.957 --rc geninfo_all_blocks=1 00:19:13.957 --rc geninfo_unexecuted_blocks=1 00:19:13.957 00:19:13.957 ' 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:13.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.957 --rc genhtml_branch_coverage=1 00:19:13.957 --rc genhtml_function_coverage=1 00:19:13.957 --rc genhtml_legend=1 00:19:13.957 --rc geninfo_all_blocks=1 00:19:13.957 --rc geninfo_unexecuted_blocks=1 00:19:13.957 00:19:13.957 ' 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:13.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.957 --rc genhtml_branch_coverage=1 00:19:13.957 --rc genhtml_function_coverage=1 00:19:13.957 --rc genhtml_legend=1 00:19:13.957 --rc geninfo_all_blocks=1 00:19:13.957 --rc geninfo_unexecuted_blocks=1 00:19:13.957 00:19:13.957 ' 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:13.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.957 --rc genhtml_branch_coverage=1 
00:19:13.957 --rc genhtml_function_coverage=1 00:19:13.957 --rc genhtml_legend=1 00:19:13.957 --rc geninfo_all_blocks=1 00:19:13.957 --rc geninfo_unexecuted_blocks=1 00:19:13.957 00:19:13.957 ' 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:13.957 05:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.957 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:19:20.523 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:20.523 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:20.523 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:20.524 Found net devices under 0000:af:00.0: cvl_0_0 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.524 
05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:20.524 Found net devices under 0000:af:00.1: cvl_0_1 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:20.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:19:20.524 00:19:20.524 --- 10.0.0.2 ping statistics --- 00:19:20.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.524 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:19:20.524 00:19:20.524 --- 10.0.0.1 ping statistics --- 00:19:20.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.524 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1200976 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1200976 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1200976 ']' 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.524 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.524 [2024-12-10 05:44:07.551052] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:19:20.524 [2024-12-10 05:44:07.551099] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:20.524 [2024-12-10 05:44:07.635412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.524 [2024-12-10 05:44:07.682208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.524 [2024-12-10 05:44:07.682244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.524 [2024-12-10 05:44:07.682251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.524 [2024-12-10 05:44:07.682257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.524 [2024-12-10 05:44:07.682262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:20.524 [2024-12-10 05:44:07.683394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:20.524 [2024-12-10 05:44:07.683506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:20.524 [2024-12-10 05:44:07.683613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.524 [2024-12-10 05:44:07.683614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:20.524 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.524 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:20.524 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.524 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.524 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.782 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.782 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.783 [2024-12-10 05:44:08.433119] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:20.783 05:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.783 Malloc0 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.783 [2024-12-10 05:44:08.477436] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.783 05:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:20.783 { 00:19:20.783 "params": { 00:19:20.783 "name": "Nvme$subsystem", 00:19:20.783 "trtype": "$TEST_TRANSPORT", 00:19:20.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.783 "adrfam": "ipv4", 00:19:20.783 "trsvcid": "$NVMF_PORT", 00:19:20.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.783 "hdgst": ${hdgst:-false}, 00:19:20.783 "ddgst": ${ddgst:-false} 00:19:20.783 }, 00:19:20.783 "method": "bdev_nvme_attach_controller" 00:19:20.783 } 00:19:20.783 EOF 00:19:20.783 )") 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:20.783 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:20.783 "params": { 00:19:20.783 "name": "Nvme1", 00:19:20.783 "trtype": "tcp", 00:19:20.783 "traddr": "10.0.0.2", 00:19:20.783 "adrfam": "ipv4", 00:19:20.783 "trsvcid": "4420", 00:19:20.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.783 "hdgst": false, 00:19:20.783 "ddgst": false 00:19:20.783 }, 00:19:20.783 "method": "bdev_nvme_attach_controller" 00:19:20.783 }' 00:19:20.783 [2024-12-10 05:44:08.527011] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:19:20.783 [2024-12-10 05:44:08.527054] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1201207 ] 00:19:20.783 [2024-12-10 05:44:08.602875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:20.783 [2024-12-10 05:44:08.650649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.783 [2024-12-10 05:44:08.650755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.783 [2024-12-10 05:44:08.650755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.041 I/O targets: 00:19:21.041 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:21.041 00:19:21.041 00:19:21.041 CUnit - A unit testing framework for C - Version 2.1-3 00:19:21.041 http://cunit.sourceforge.net/ 00:19:21.041 00:19:21.041 00:19:21.041 Suite: bdevio tests on: Nvme1n1 00:19:21.041 Test: blockdev write read block ...passed 00:19:21.299 Test: blockdev write zeroes read block ...passed 00:19:21.299 Test: blockdev write zeroes read no split ...passed 00:19:21.299 Test: blockdev write zeroes 
read split ...passed 00:19:21.299 Test: blockdev write zeroes read split partial ...passed 00:19:21.299 Test: blockdev reset ...[2024-12-10 05:44:08.979195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:21.299 [2024-12-10 05:44:08.979262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x659d30 (9): Bad file descriptor 00:19:21.299 [2024-12-10 05:44:09.115625] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:21.299 passed 00:19:21.299 Test: blockdev write read 8 blocks ...passed 00:19:21.299 Test: blockdev write read size > 128k ...passed 00:19:21.299 Test: blockdev write read invalid size ...passed 00:19:21.299 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:21.299 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:21.299 Test: blockdev write read max offset ...passed 00:19:21.557 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:21.557 Test: blockdev writev readv 8 blocks ...passed 00:19:21.557 Test: blockdev writev readv 30 x 1block ...passed 00:19:21.557 Test: blockdev writev readv block ...passed 00:19:21.557 Test: blockdev writev readv size > 128k ...passed 00:19:21.557 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:21.557 Test: blockdev comparev and writev ...[2024-12-10 05:44:09.325917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.557 [2024-12-10 05:44:09.325944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:21.557 [2024-12-10 05:44:09.325958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.557 [2024-12-10 
05:44:09.325965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.557 [2024-12-10 05:44:09.326214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.557 [2024-12-10 05:44:09.326226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:21.557 [2024-12-10 05:44:09.326237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.557 [2024-12-10 05:44:09.326244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:21.557 [2024-12-10 05:44:09.326464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.557 [2024-12-10 05:44:09.326474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:21.557 [2024-12-10 05:44:09.326486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.557 [2024-12-10 05:44:09.326492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:21.557 [2024-12-10 05:44:09.326731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.557 [2024-12-10 05:44:09.326741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:21.557 [2024-12-10 05:44:09.326754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.557 [2024-12-10 05:44:09.326760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:21.557 passed 00:19:21.557 Test: blockdev nvme passthru rw ...passed 00:19:21.557 Test: blockdev nvme passthru vendor specific ...[2024-12-10 05:44:09.408530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.557 [2024-12-10 05:44:09.408548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:21.557 [2024-12-10 05:44:09.408652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.557 [2024-12-10 05:44:09.408661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:21.557 [2024-12-10 05:44:09.408772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.557 [2024-12-10 05:44:09.408782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:21.557 [2024-12-10 05:44:09.408881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.557 [2024-12-10 05:44:09.408891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:21.557 passed 00:19:21.557 Test: blockdev nvme admin passthru ...passed 00:19:21.815 Test: blockdev copy ...passed 00:19:21.815 00:19:21.815 Run Summary: Type Total Ran Passed Failed Inactive 00:19:21.815 suites 1 1 n/a 0 0 00:19:21.815 tests 23 23 23 0 0 00:19:21.815 asserts 152 152 152 0 n/a 00:19:21.815 00:19:21.815 Elapsed time = 1.216 seconds 
00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:22.074 rmmod nvme_tcp 00:19:22.074 rmmod nvme_fabrics 00:19:22.074 rmmod nvme_keyring 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1200976 ']' 00:19:22.074 05:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1200976 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1200976 ']' 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1200976 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1200976 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1200976' 00:19:22.074 killing process with pid 1200976 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1200976 00:19:22.074 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1200976 00:19:22.333 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:22.333 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:22.333 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:22.333 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:22.333 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:22.333 05:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:22.333 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:22.333 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:22.333 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:22.333 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.333 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.333 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:24.868 00:19:24.868 real 0m10.818s 00:19:24.868 user 0m13.583s 00:19:24.868 sys 0m5.344s 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.868 ************************************ 00:19:24.868 END TEST nvmf_bdevio_no_huge 00:19:24.868 ************************************ 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:24.868 
************************************ 00:19:24.868 START TEST nvmf_tls 00:19:24.868 ************************************ 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:24.868 * Looking for test storage... 00:19:24.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.868 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.869 --rc genhtml_branch_coverage=1 00:19:24.869 --rc genhtml_function_coverage=1 00:19:24.869 --rc genhtml_legend=1 00:19:24.869 --rc geninfo_all_blocks=1 00:19:24.869 --rc geninfo_unexecuted_blocks=1 00:19:24.869 00:19:24.869 ' 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.869 --rc genhtml_branch_coverage=1 00:19:24.869 --rc genhtml_function_coverage=1 00:19:24.869 --rc genhtml_legend=1 00:19:24.869 --rc geninfo_all_blocks=1 00:19:24.869 --rc geninfo_unexecuted_blocks=1 00:19:24.869 00:19:24.869 ' 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.869 --rc genhtml_branch_coverage=1 00:19:24.869 --rc genhtml_function_coverage=1 00:19:24.869 --rc genhtml_legend=1 00:19:24.869 --rc geninfo_all_blocks=1 00:19:24.869 --rc geninfo_unexecuted_blocks=1 00:19:24.869 00:19:24.869 ' 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.869 --rc genhtml_branch_coverage=1 00:19:24.869 --rc genhtml_function_coverage=1 00:19:24.869 --rc genhtml_legend=1 00:19:24.869 --rc geninfo_all_blocks=1 00:19:24.869 --rc geninfo_unexecuted_blocks=1 00:19:24.869 00:19:24.869 ' 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.869 
05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:24.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:24.869 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.440 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.441 05:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:31.441 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:31.441 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.441 05:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:31.441 Found net devices under 0000:af:00.0: cvl_0_0 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:31.441 Found net devices under 0000:af:00.1: cvl_0_1 00:19:31.441 05:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:31.441 
05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:31.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:19:31.441 00:19:31.441 --- 10.0.0.2 ping statistics --- 00:19:31.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.441 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:19:31.441 00:19:31.441 --- 10.0.0.1 ping statistics --- 00:19:31.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.441 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1204907 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1204907 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:31.441 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1204907 ']' 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.442 [2024-12-10 05:44:18.471669] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:19:31.442 [2024-12-10 05:44:18.471714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.442 [2024-12-10 05:44:18.548914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.442 [2024-12-10 05:44:18.588017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.442 [2024-12-10 05:44:18.588052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:31.442 [2024-12-10 05:44:18.588060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.442 [2024-12-10 05:44:18.588066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.442 [2024-12-10 05:44:18.588073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.442 [2024-12-10 05:44:18.588527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:31.442 true 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:31.442 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:31.442 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:31.442 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:31.442 
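The rpc.py invocations in this trace (sock_set_default_impl, sock_impl_get_options, and so on) are thin wrappers over SPDK's JSON-RPC protocol on the Unix-domain socket /var/tmp/spdk.sock. A minimal sketch of the request shape those calls send, using a canned response instead of a live socket — the method and parameter names come from the trace above, but the exact response body here is an illustrative assumption:

```python
import json

def build_rpc_request(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request like the ones rpc.py writes to
    the SPDK Unix-domain socket (/var/tmp/spdk.sock by default)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Request matching `rpc.py sock_impl_set_options -i ssl --tls-version 13`
req = build_rpc_request("sock_impl_set_options",
                        {"impl_name": "ssl", "tls_version": 13})

# A canned success response (illustrative only; a live target would
# write this back on the same socket with the matching request id).
canned = '{"jsonrpc": "2.0", "id": 1, "result": true}'
resp = json.loads(canned)
assert resp["id"] == json.loads(req)["id"] and resp["result"] is True
```

The `--wait-for-rpc` flag seen on the nvmf_tgt command line means the target accepts only a startup subset of RPCs (such as sock_impl_set_options) until framework_start_init is called, which is why the trace configures the ssl socket implementation before that call.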
05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:31.442 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:31.442 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:31.700 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:31.700 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:31.700 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:31.959 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:31.959 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:31.959 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:31.959 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:31.959 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:31.959 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:32.217 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:32.217 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:32.217 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
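Each verification step above pipes sock_impl_get_options through `jq -r .tls_version` (or `.enable_ktls`) and compares the extracted value in bash, e.g. `[[ 13 != \1\3 ]]`. The same check expressed in Python, against an illustrative options document — the two field names are taken from the trace, the document itself is a placeholder, not real sock_impl_get_options output:

```python
import json

# Placeholder stand-in for `rpc.py sock_impl_get_options -i ssl` output;
# only the tls_version / enable_ktls field names come from the trace.
options_json = '{"tls_version": 13, "enable_ktls": false}'

def get_option(doc: str, field: str):
    """Mirror `jq -r .<field>` on a JSON document."""
    return json.loads(doc)[field]

# Mirrors: version=$(... | jq -r .tls_version); [[ $version != \1\3 ]]
assert get_option(options_json, "tls_version") == 13

# Mirrors: ktls=$(... | jq -r .enable_ktls); [[ $ktls != \f\a\l\s\e ]]
assert get_option(options_json, "enable_ktls") is False
```

Note that the trace also stores tls_version 7 and reads the same value back; the set/get round-trip only confirms the option was recorded, not that the version is usable for a connection.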
00:19:32.475 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.475 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:32.733 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:32.733 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:32.733 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:32.991 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.991 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:32.991 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:32.991 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:32.991 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:32.991 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:32.991 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:32.991 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:32.991 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:32.991 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:32.991 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:33.249 05:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:33.249 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:33.249 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:33.249 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.249 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:33.249 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:33.249 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:33.249 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:33.249 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:33.249 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:33.249 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.JMTTA5jO7P 00:19:33.249 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:33.250 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.ezyLvz7kA1 00:19:33.250 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:33.250 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:33.250 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.JMTTA5jO7P 00:19:33.250 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.ezyLvz7kA1 00:19:33.250 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:33.508 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:33.767 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.JMTTA5jO7P 00:19:33.767 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JMTTA5jO7P 00:19:33.767 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.767 [2024-12-10 05:44:21.597891] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.767 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:34.025 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:34.284 [2024-12-10 05:44:21.994891] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.284 [2024-12-10 05:44:21.995084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.284 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:34.541 malloc0 00:19:34.541 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:34.541 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JMTTA5jO7P 00:19:34.799 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:35.057 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.JMTTA5jO7P 00:19:47.247 Initializing NVMe Controllers 00:19:47.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:47.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:47.247 Initialization complete. Launching workers. 
00:19:47.247 ======================================================== 00:19:47.247 Latency(us) 00:19:47.247 Device Information : IOPS MiB/s Average min max 00:19:47.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16900.62 66.02 3786.91 805.18 5891.92 00:19:47.247 ======================================================== 00:19:47.247 Total : 16900.62 66.02 3786.91 805.18 5891.92 00:19:47.247 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JMTTA5jO7P 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JMTTA5jO7P 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1207401 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1207401 /var/tmp/bdevperf.sock 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1207401 ']' 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.247 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.247 [2024-12-10 05:44:32.974126] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:19:47.247 [2024-12-10 05:44:32.974178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207401 ] 00:19:47.247 [2024-12-10 05:44:33.046430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.247 [2024-12-10 05:44:33.084799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.247 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.247 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:47.247 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JMTTA5jO7P 00:19:47.247 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:47.247 [2024-12-10 05:44:33.541191] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.247 TLSTESTn1 00:19:47.247 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:47.247 Running I/O for 10 seconds... 00:19:48.257 5446.00 IOPS, 21.27 MiB/s [2024-12-10T04:44:37.085Z] 5469.50 IOPS, 21.37 MiB/s [2024-12-10T04:44:38.017Z] 5510.00 IOPS, 21.52 MiB/s [2024-12-10T04:44:38.948Z] 5554.50 IOPS, 21.70 MiB/s [2024-12-10T04:44:39.880Z] 5564.40 IOPS, 21.74 MiB/s [2024-12-10T04:44:40.813Z] 5592.67 IOPS, 21.85 MiB/s [2024-12-10T04:44:41.745Z] 5565.14 IOPS, 21.74 MiB/s [2024-12-10T04:44:43.117Z] 5554.88 IOPS, 21.70 MiB/s [2024-12-10T04:44:44.050Z] 5530.44 IOPS, 21.60 MiB/s [2024-12-10T04:44:44.050Z] 5529.80 IOPS, 21.60 MiB/s 00:19:56.154 Latency(us) 00:19:56.154 [2024-12-10T04:44:44.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.154 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:56.154 Verification LBA range: start 0x0 length 0x2000 00:19:56.154 TLSTESTn1 : 10.01 5535.69 21.62 0.00 0.00 23089.68 4837.18 25215.76 00:19:56.154 [2024-12-10T04:44:44.050Z] =================================================================================================================== 00:19:56.154 [2024-12-10T04:44:44.050Z] Total : 5535.69 21.62 0.00 0.00 23089.68 4837.18 25215.76 00:19:56.154 { 00:19:56.154 "results": [ 00:19:56.154 { 00:19:56.154 "job": "TLSTESTn1", 00:19:56.154 "core_mask": "0x4", 00:19:56.154 "workload": "verify", 00:19:56.154 "status": "finished", 00:19:56.154 "verify_range": { 00:19:56.154 "start": 0, 00:19:56.154 "length": 8192 00:19:56.154 }, 00:19:56.154 "queue_depth": 128, 00:19:56.154 "io_size": 4096, 00:19:56.154 "runtime": 10.012113, 00:19:56.154 "iops": 
5535.6946131151335, 00:19:56.154 "mibps": 21.62380708248099, 00:19:56.154 "io_failed": 0, 00:19:56.154 "io_timeout": 0, 00:19:56.154 "avg_latency_us": 23089.67830199054, 00:19:56.154 "min_latency_us": 4837.1809523809525, 00:19:56.154 "max_latency_us": 25215.75619047619 00:19:56.154 } 00:19:56.154 ], 00:19:56.154 "core_count": 1 00:19:56.154 } 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1207401 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1207401 ']' 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1207401 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1207401 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1207401' 00:19:56.154 killing process with pid 1207401 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1207401 00:19:56.154 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.154 00:19:56.154 Latency(us) 00:19:56.154 [2024-12-10T04:44:44.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.154 [2024-12-10T04:44:44.050Z] 
=================================================================================================================== 00:19:56.154 [2024-12-10T04:44:44.050Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1207401 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ezyLvz7kA1 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ezyLvz7kA1 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ezyLvz7kA1 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ezyLvz7kA1 00:19:56.154 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.155 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1209185 00:19:56.155 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.155 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.155 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1209185 /var/tmp/bdevperf.sock 00:19:56.155 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1209185 ']' 00:19:56.155 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.155 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.155 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.155 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.155 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.155 [2024-12-10 05:44:44.040137] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:19:56.155 [2024-12-10 05:44:44.040197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1209185 ] 00:19:56.412 [2024-12-10 05:44:44.113732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.412 [2024-12-10 05:44:44.150206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.412 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.412 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:56.412 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ezyLvz7kA1 00:19:56.669 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:56.928 [2024-12-10 05:44:44.606049] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.928 [2024-12-10 05:44:44.614340] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:56.928 [2024-12-10 05:44:44.614392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246f410 (107): Transport endpoint is not connected 00:19:56.928 [2024-12-10 05:44:44.615386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246f410 (9): Bad file descriptor 00:19:56.928 
[2024-12-10 05:44:44.616388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:56.928 [2024-12-10 05:44:44.616398] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:56.928 [2024-12-10 05:44:44.616406] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:56.928 [2024-12-10 05:44:44.616414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:56.928 request: 00:19:56.928 { 00:19:56.928 "name": "TLSTEST", 00:19:56.928 "trtype": "tcp", 00:19:56.928 "traddr": "10.0.0.2", 00:19:56.928 "adrfam": "ipv4", 00:19:56.928 "trsvcid": "4420", 00:19:56.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.928 "prchk_reftag": false, 00:19:56.928 "prchk_guard": false, 00:19:56.928 "hdgst": false, 00:19:56.928 "ddgst": false, 00:19:56.928 "psk": "key0", 00:19:56.928 "allow_unrecognized_csi": false, 00:19:56.928 "method": "bdev_nvme_attach_controller", 00:19:56.928 "req_id": 1 00:19:56.928 } 00:19:56.928 Got JSON-RPC error response 00:19:56.928 response: 00:19:56.928 { 00:19:56.928 "code": -5, 00:19:56.928 "message": "Input/output error" 00:19:56.928 } 00:19:56.928 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1209185 00:19:56.928 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1209185 ']' 00:19:56.928 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1209185 00:19:56.928 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:56.928 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.928 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1209185 00:19:56.928 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:56.928 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:56.928 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1209185' 00:19:56.928 killing process with pid 1209185 00:19:56.928 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1209185 00:19:56.928 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.928 00:19:56.928 Latency(us) 00:19:56.928 [2024-12-10T04:44:44.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.928 [2024-12-10T04:44:44.824Z] =================================================================================================================== 00:19:56.928 [2024-12-10T04:44:44.824Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.928 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1209185 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.JMTTA5jO7P 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.JMTTA5jO7P 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.JMTTA5jO7P 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JMTTA5jO7P 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1209209 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1209209 
/var/tmp/bdevperf.sock 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1209209 ']' 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.186 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.186 [2024-12-10 05:44:44.896989] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:19:57.186 [2024-12-10 05:44:44.897040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1209209 ] 00:19:57.186 [2024-12-10 05:44:44.968389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.186 [2024-12-10 05:44:45.004291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.443 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.443 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:57.443 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JMTTA5jO7P 00:19:57.443 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:57.700 [2024-12-10 05:44:45.476580] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.700 [2024-12-10 05:44:45.486445] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:57.700 [2024-12-10 05:44:45.486466] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:57.700 [2024-12-10 05:44:45.486488] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:57.700 [2024-12-10 05:44:45.486818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196f410 (107): Transport endpoint is not connected 00:19:57.700 [2024-12-10 05:44:45.487811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196f410 (9): Bad file descriptor 00:19:57.700 [2024-12-10 05:44:45.488813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:57.700 [2024-12-10 05:44:45.488824] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:57.700 [2024-12-10 05:44:45.488831] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:57.700 [2024-12-10 05:44:45.488839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:57.700 request: 00:19:57.700 { 00:19:57.700 "name": "TLSTEST", 00:19:57.700 "trtype": "tcp", 00:19:57.700 "traddr": "10.0.0.2", 00:19:57.700 "adrfam": "ipv4", 00:19:57.700 "trsvcid": "4420", 00:19:57.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.700 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:57.700 "prchk_reftag": false, 00:19:57.700 "prchk_guard": false, 00:19:57.700 "hdgst": false, 00:19:57.700 "ddgst": false, 00:19:57.700 "psk": "key0", 00:19:57.700 "allow_unrecognized_csi": false, 00:19:57.701 "method": "bdev_nvme_attach_controller", 00:19:57.701 "req_id": 1 00:19:57.701 } 00:19:57.701 Got JSON-RPC error response 00:19:57.701 response: 00:19:57.701 { 00:19:57.701 "code": -5, 00:19:57.701 "message": "Input/output error" 00:19:57.701 } 00:19:57.701 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1209209 00:19:57.701 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1209209 ']' 00:19:57.701 05:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1209209 00:19:57.701 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:57.701 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.701 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1209209 00:19:57.701 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:57.701 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:57.701 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1209209' 00:19:57.701 killing process with pid 1209209 00:19:57.701 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1209209 00:19:57.701 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.701 00:19:57.701 Latency(us) 00:19:57.701 [2024-12-10T04:44:45.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.701 [2024-12-10T04:44:45.597Z] =================================================================================================================== 00:19:57.701 [2024-12-10T04:44:45.597Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:57.701 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1209209 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:57.959 05:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.JMTTA5jO7P 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.JMTTA5jO7P 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.JMTTA5jO7P 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JMTTA5jO7P 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1209436 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1209436 /var/tmp/bdevperf.sock 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1209436 ']' 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.959 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.959 [2024-12-10 05:44:45.770576] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:19:57.959 [2024-12-10 05:44:45.770621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1209436 ] 00:19:57.959 [2024-12-10 05:44:45.843419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.216 [2024-12-10 05:44:45.880066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.216 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.216 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:58.216 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JMTTA5jO7P 00:19:58.474 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:58.474 [2024-12-10 05:44:46.344062] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.474 [2024-12-10 05:44:46.348645] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:58.474 [2024-12-10 05:44:46.348666] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:58.474 [2024-12-10 05:44:46.348689] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:58.474 [2024-12-10 05:44:46.349348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfb410 (107): Transport endpoint is not connected 00:19:58.474 [2024-12-10 05:44:46.350338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfb410 (9): Bad file descriptor 00:19:58.474 [2024-12-10 05:44:46.351340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:58.474 [2024-12-10 05:44:46.351350] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:58.474 [2024-12-10 05:44:46.351357] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:58.474 [2024-12-10 05:44:46.351366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:58.474 request: 00:19:58.474 { 00:19:58.474 "name": "TLSTEST", 00:19:58.474 "trtype": "tcp", 00:19:58.474 "traddr": "10.0.0.2", 00:19:58.474 "adrfam": "ipv4", 00:19:58.474 "trsvcid": "4420", 00:19:58.474 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:58.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.474 "prchk_reftag": false, 00:19:58.474 "prchk_guard": false, 00:19:58.474 "hdgst": false, 00:19:58.474 "ddgst": false, 00:19:58.474 "psk": "key0", 00:19:58.474 "allow_unrecognized_csi": false, 00:19:58.474 "method": "bdev_nvme_attach_controller", 00:19:58.474 "req_id": 1 00:19:58.474 } 00:19:58.474 Got JSON-RPC error response 00:19:58.474 response: 00:19:58.474 { 00:19:58.474 "code": -5, 00:19:58.474 "message": "Input/output error" 00:19:58.474 } 00:19:58.731 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1209436 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1209436 ']' 00:19:58.732 05:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1209436 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1209436 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1209436' 00:19:58.732 killing process with pid 1209436 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1209436 00:19:58.732 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.732 00:19:58.732 Latency(us) 00:19:58.732 [2024-12-10T04:44:46.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.732 [2024-12-10T04:44:46.628Z] =================================================================================================================== 00:19:58.732 [2024-12-10T04:44:46.628Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1209436 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:58.732 05:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1209575 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.732 05:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1209575 /var/tmp/bdevperf.sock 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1209575 ']' 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.732 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.990 [2024-12-10 05:44:46.630994] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:19:58.990 [2024-12-10 05:44:46.631042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1209575 ] 00:19:58.990 [2024-12-10 05:44:46.703264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.990 [2024-12-10 05:44:46.741883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.990 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.990 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:58.990 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:59.247 [2024-12-10 05:44:47.009340] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:59.247 [2024-12-10 05:44:47.009373] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:59.247 request: 00:19:59.247 { 00:19:59.247 "name": "key0", 00:19:59.247 "path": "", 00:19:59.247 "method": "keyring_file_add_key", 00:19:59.247 "req_id": 1 00:19:59.247 } 00:19:59.247 Got JSON-RPC error response 00:19:59.247 response: 00:19:59.247 { 00:19:59.247 "code": -1, 00:19:59.247 "message": "Operation not permitted" 00:19:59.247 } 00:19:59.247 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.505 [2024-12-10 05:44:47.205926] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:59.505 [2024-12-10 05:44:47.205959] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:59.505 request: 00:19:59.505 { 00:19:59.505 "name": "TLSTEST", 00:19:59.505 "trtype": "tcp", 00:19:59.505 "traddr": "10.0.0.2", 00:19:59.505 "adrfam": "ipv4", 00:19:59.505 "trsvcid": "4420", 00:19:59.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.505 "prchk_reftag": false, 00:19:59.505 "prchk_guard": false, 00:19:59.505 "hdgst": false, 00:19:59.505 "ddgst": false, 00:19:59.505 "psk": "key0", 00:19:59.505 "allow_unrecognized_csi": false, 00:19:59.505 "method": "bdev_nvme_attach_controller", 00:19:59.505 "req_id": 1 00:19:59.505 } 00:19:59.505 Got JSON-RPC error response 00:19:59.505 response: 00:19:59.505 { 00:19:59.505 "code": -126, 00:19:59.505 "message": "Required key not available" 00:19:59.505 } 00:19:59.505 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1209575 00:19:59.505 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1209575 ']' 00:19:59.505 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1209575 00:19:59.505 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.505 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.505 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1209575 00:19:59.505 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:59.505 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:59.505 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1209575' 00:19:59.505 killing process with pid 1209575 
00:19:59.505 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1209575 00:19:59.505 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.505 00:19:59.505 Latency(us) 00:19:59.505 [2024-12-10T04:44:47.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.505 [2024-12-10T04:44:47.401Z] =================================================================================================================== 00:19:59.505 [2024-12-10T04:44:47.401Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.505 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1209575 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1204907 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1204907 ']' 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1204907 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1204907 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1204907' 00:19:59.767 killing process with pid 1204907 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1204907 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1204907 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:59.767 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.nl7cTjfzQg 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:00.026 05:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.nl7cTjfzQg 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1209695 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1209695 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1209695 ']' 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.026 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.026 [2024-12-10 05:44:47.743079] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:20:00.026 [2024-12-10 05:44:47.743125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.026 [2024-12-10 05:44:47.821671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.026 [2024-12-10 05:44:47.862763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.026 [2024-12-10 05:44:47.862798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.026 [2024-12-10 05:44:47.862805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.026 [2024-12-10 05:44:47.862811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.026 [2024-12-10 05:44:47.862816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:00.026 [2024-12-10 05:44:47.863324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.285 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.285 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:00.285 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.285 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.285 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.285 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.285 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.nl7cTjfzQg 00:20:00.285 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nl7cTjfzQg 00:20:00.285 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.543 [2024-12-10 05:44:48.180286] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.543 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:00.543 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:00.801 [2024-12-10 05:44:48.577301] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.801 [2024-12-10 05:44:48.577499] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:00.801 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:01.059 malloc0 00:20:01.059 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.317 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nl7cTjfzQg 00:20:01.317 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nl7cTjfzQg 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nl7cTjfzQg 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1210103 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1210103 /var/tmp/bdevperf.sock 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1210103 ']' 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.575 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.575 [2024-12-10 05:44:49.460361] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:20:01.575 [2024-12-10 05:44:49.460410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1210103 ] 00:20:01.834 [2024-12-10 05:44:49.535019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.834 [2024-12-10 05:44:49.578137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.834 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.834 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.834 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nl7cTjfzQg 00:20:02.092 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.349 [2024-12-10 05:44:50.038616] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.349 TLSTESTn1 00:20:02.349 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:02.349 Running I/O for 10 seconds... 
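bdevperf reports the verify workload both as IOPS and as MiB/s; for the fixed 4096-byte I/O size configured with `-o 4096`, the two columns are related by simple arithmetic (no SPDK assumptions, just the unit conversion used in the result table):

```python
def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size_bytes / (1 << 20)

# The totals this run reports for TLSTESTn1: 5566.41 IOPS at 4 KiB.
mibps = iops_to_mibps(5566.41, 4096)
```

This matches the `21.74 MiB/s` printed alongside `5566.41 IOPS` in the Latency table.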
00:20:04.656 5327.00 IOPS, 20.81 MiB/s [2024-12-10T04:44:53.487Z] 5474.50 IOPS, 21.38 MiB/s [2024-12-10T04:44:54.421Z] 5491.67 IOPS, 21.45 MiB/s [2024-12-10T04:44:55.354Z] 5526.25 IOPS, 21.59 MiB/s [2024-12-10T04:44:56.289Z] 5547.20 IOPS, 21.67 MiB/s [2024-12-10T04:44:57.663Z] 5569.17 IOPS, 21.75 MiB/s [2024-12-10T04:44:58.598Z] 5584.14 IOPS, 21.81 MiB/s [2024-12-10T04:44:59.532Z] 5569.38 IOPS, 21.76 MiB/s [2024-12-10T04:45:00.466Z] 5554.44 IOPS, 21.70 MiB/s [2024-12-10T04:45:00.466Z] 5561.60 IOPS, 21.73 MiB/s 00:20:12.570 Latency(us) 00:20:12.570 [2024-12-10T04:45:00.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.570 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:12.570 Verification LBA range: start 0x0 length 0x2000 00:20:12.570 TLSTESTn1 : 10.01 5566.41 21.74 0.00 0.00 22961.98 4743.56 30458.64 00:20:12.570 [2024-12-10T04:45:00.466Z] =================================================================================================================== 00:20:12.570 [2024-12-10T04:45:00.466Z] Total : 5566.41 21.74 0.00 0.00 22961.98 4743.56 30458.64 00:20:12.570 { 00:20:12.570 "results": [ 00:20:12.570 { 00:20:12.570 "job": "TLSTESTn1", 00:20:12.570 "core_mask": "0x4", 00:20:12.570 "workload": "verify", 00:20:12.570 "status": "finished", 00:20:12.570 "verify_range": { 00:20:12.570 "start": 0, 00:20:12.570 "length": 8192 00:20:12.570 }, 00:20:12.570 "queue_depth": 128, 00:20:12.570 "io_size": 4096, 00:20:12.570 "runtime": 10.014359, 00:20:12.570 "iops": 5566.407195907396, 00:20:12.570 "mibps": 21.743778109013267, 00:20:12.570 "io_failed": 0, 00:20:12.570 "io_timeout": 0, 00:20:12.570 "avg_latency_us": 22961.984311409982, 00:20:12.570 "min_latency_us": 4743.558095238095, 00:20:12.570 "max_latency_us": 30458.63619047619 00:20:12.570 } 00:20:12.570 ], 00:20:12.570 "core_count": 1 00:20:12.570 } 00:20:12.570 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:12.570 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1210103 00:20:12.570 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1210103 ']' 00:20:12.570 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1210103 00:20:12.570 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:12.570 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.570 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1210103 00:20:12.570 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:12.570 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:12.570 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1210103' 00:20:12.570 killing process with pid 1210103 00:20:12.570 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1210103 00:20:12.570 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.570 00:20:12.570 Latency(us) 00:20:12.570 [2024-12-10T04:45:00.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.571 [2024-12-10T04:45:00.467Z] =================================================================================================================== 00:20:12.571 [2024-12-10T04:45:00.467Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.571 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1210103 00:20:12.829 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.nl7cTjfzQg 00:20:12.829 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nl7cTjfzQg 00:20:12.829 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:12.829 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nl7cTjfzQg 00:20:12.829 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:12.829 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.829 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:12.829 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.829 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nl7cTjfzQg 00:20:12.829 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nl7cTjfzQg 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1211887 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1211887 /var/tmp/bdevperf.sock 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1211887 ']' 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.830 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.830 [2024-12-10 05:45:00.546759] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:20:12.830 [2024-12-10 05:45:00.546808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211887 ] 00:20:12.830 [2024-12-10 05:45:00.619726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.830 [2024-12-10 05:45:00.660397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.087 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.087 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:13.087 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nl7cTjfzQg 00:20:13.087 [2024-12-10 05:45:00.932279] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nl7cTjfzQg': 0100666 00:20:13.087 [2024-12-10 05:45:00.932311] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:13.087 request: 00:20:13.087 { 00:20:13.087 "name": "key0", 00:20:13.087 "path": "/tmp/tmp.nl7cTjfzQg", 00:20:13.087 "method": "keyring_file_add_key", 00:20:13.087 "req_id": 1 00:20:13.087 } 00:20:13.087 Got JSON-RPC error response 00:20:13.087 response: 00:20:13.087 { 00:20:13.087 "code": -1, 00:20:13.087 "message": "Operation not permitted" 00:20:13.087 } 00:20:13.087 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:13.345 [2024-12-10 05:45:01.124860] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.345 [2024-12-10 05:45:01.124892] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:13.345 request: 00:20:13.345 { 00:20:13.345 "name": "TLSTEST", 00:20:13.345 "trtype": "tcp", 00:20:13.345 "traddr": "10.0.0.2", 00:20:13.345 "adrfam": "ipv4", 00:20:13.345 "trsvcid": "4420", 00:20:13.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.345 "prchk_reftag": false, 00:20:13.345 "prchk_guard": false, 00:20:13.345 "hdgst": false, 00:20:13.345 "ddgst": false, 00:20:13.345 "psk": "key0", 00:20:13.345 "allow_unrecognized_csi": false, 00:20:13.345 "method": "bdev_nvme_attach_controller", 00:20:13.345 "req_id": 1 00:20:13.345 } 00:20:13.345 Got JSON-RPC error response 00:20:13.345 response: 00:20:13.345 { 00:20:13.345 "code": -126, 00:20:13.345 "message": "Required key not available" 00:20:13.345 } 00:20:13.345 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1211887 00:20:13.345 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1211887 ']' 00:20:13.345 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1211887 00:20:13.345 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.345 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.345 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1211887 00:20:13.345 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:13.345 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:13.345 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1211887' 00:20:13.345 killing process with pid 1211887 00:20:13.345 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1211887 00:20:13.345 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.345 00:20:13.345 Latency(us) 00:20:13.345 [2024-12-10T04:45:01.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.345 [2024-12-10T04:45:01.241Z] =================================================================================================================== 00:20:13.345 [2024-12-10T04:45:01.241Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.345 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1211887 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1209695 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1209695 ']' 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1209695 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1209695 00:20:13.603 
05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1209695' 00:20:13.603 killing process with pid 1209695 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1209695 00:20:13.603 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1209695 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1212082 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1212082 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1212082 ']' 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:13.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.862 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.862 [2024-12-10 05:45:01.643800] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:20:13.862 [2024-12-10 05:45:01.643848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.862 [2024-12-10 05:45:01.721666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.121 [2024-12-10 05:45:01.762818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.121 [2024-12-10 05:45:01.762855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.121 [2024-12-10 05:45:01.762863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.121 [2024-12-10 05:45:01.762869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.121 [2024-12-10 05:45:01.762875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
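The negative cases in this test hinge on the `Invalid permissions for key file '…': 0100666` error from keyring_file_check_path: a PSK file that grants any group/other access is rejected, which is why the suite flips between `chmod 0666` (expected failure) and `chmod 0600` (expected success). A sketch of a check assumed equivalent to SPDK's mode test:

```python
import os
import tempfile

def key_file_permissions_ok(path: str) -> bool:
    """Reject a key file that grants group or other access
    (assumed equivalent to keyring_file_check_path's mode test)."""
    mode = os.stat(path).st_mode
    return (mode & 0o077) == 0  # only the owner may access the PSK

# chmod 0666 -> rejected (the NOT cases); chmod 0600 -> accepted.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)
rejected = not key_file_permissions_ok(path)
os.chmod(path, 0o600)
accepted = key_file_permissions_ok(path)
os.unlink(path)
```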
00:20:14.121 [2024-12-10 05:45:01.763398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.nl7cTjfzQg 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.nl7cTjfzQg 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.nl7cTjfzQg 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nl7cTjfzQg 00:20:14.691 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.949 [2024-12-10 05:45:02.681267] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.949 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:15.207 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:15.207 [2024-12-10 05:45:03.082280] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.207 [2024-12-10 05:45:03.082497] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.466 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:15.466 malloc0 00:20:15.466 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.724 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nl7cTjfzQg 00:20:15.982 [2024-12-10 05:45:03.703884] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nl7cTjfzQg': 0100666 00:20:15.982 [2024-12-10 05:45:03.703912] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:15.982 request: 00:20:15.982 { 00:20:15.982 "name": "key0", 00:20:15.982 "path": "/tmp/tmp.nl7cTjfzQg", 00:20:15.982 "method": "keyring_file_add_key", 00:20:15.982 "req_id": 1 
00:20:15.982 } 00:20:15.982 Got JSON-RPC error response 00:20:15.982 response: 00:20:15.982 { 00:20:15.982 "code": -1, 00:20:15.982 "message": "Operation not permitted" 00:20:15.982 } 00:20:15.982 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:16.241 [2024-12-10 05:45:03.900408] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:16.241 [2024-12-10 05:45:03.900438] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:16.241 request: 00:20:16.241 { 00:20:16.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.241 "host": "nqn.2016-06.io.spdk:host1", 00:20:16.241 "psk": "key0", 00:20:16.241 "method": "nvmf_subsystem_add_host", 00:20:16.241 "req_id": 1 00:20:16.241 } 00:20:16.241 Got JSON-RPC error response 00:20:16.241 response: 00:20:16.241 { 00:20:16.241 "code": -32603, 00:20:16.241 "message": "Internal error" 00:20:16.241 } 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1212082 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1212082 ']' 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1212082 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:16.241 05:45:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1212082 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1212082' 00:20:16.241 killing process with pid 1212082 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1212082 00:20:16.241 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1212082 00:20:16.499 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.nl7cTjfzQg 00:20:16.499 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:16.499 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:16.499 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.499 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.499 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1212568 00:20:16.500 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:16.500 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1212568 00:20:16.500 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1212568 ']' 00:20:16.500 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.500 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.500 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.500 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.500 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.500 [2024-12-10 05:45:04.203418] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:20:16.500 [2024-12-10 05:45:04.203462] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.500 [2024-12-10 05:45:04.279584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.500 [2024-12-10 05:45:04.318320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.500 [2024-12-10 05:45:04.318354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.500 [2024-12-10 05:45:04.318362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.500 [2024-12-10 05:45:04.318368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.500 [2024-12-10 05:45:04.318373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.500 [2024-12-10 05:45:04.318849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.758 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.758 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:16.758 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.758 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.758 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.758 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.758 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.nl7cTjfzQg 00:20:16.758 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nl7cTjfzQg 00:20:16.758 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:16.758 [2024-12-10 05:45:04.639196] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.016 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.016 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:17.274 [2024-12-10 05:45:05.052256] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.274 [2024-12-10 05:45:05.052459] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:17.274 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:17.532 malloc0 00:20:17.532 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:17.790 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nl7cTjfzQg 00:20:17.790 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:18.048 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1212908 00:20:18.048 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.048 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.048 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1212908 /var/tmp/bdevperf.sock 00:20:18.048 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1212908 ']' 00:20:18.048 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.048 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.048 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:20:18.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.048 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.048 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.048 [2024-12-10 05:45:05.918617] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:20:18.048 [2024-12-10 05:45:05.918668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212908 ] 00:20:18.306 [2024-12-10 05:45:05.994503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.306 [2024-12-10 05:45:06.036455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.306 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.306 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:18.306 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nl7cTjfzQg 00:20:18.565 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:18.836 [2024-12-10 05:45:06.522040] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.836 TLSTESTn1 00:20:18.836 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:19.095 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:19.095 "subsystems": [ 00:20:19.095 { 00:20:19.095 "subsystem": "keyring", 00:20:19.095 "config": [ 00:20:19.095 { 00:20:19.095 "method": "keyring_file_add_key", 00:20:19.095 "params": { 00:20:19.095 "name": "key0", 00:20:19.095 "path": "/tmp/tmp.nl7cTjfzQg" 00:20:19.095 } 00:20:19.095 } 00:20:19.095 ] 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "subsystem": "iobuf", 00:20:19.095 "config": [ 00:20:19.095 { 00:20:19.095 "method": "iobuf_set_options", 00:20:19.095 "params": { 00:20:19.095 "small_pool_count": 8192, 00:20:19.095 "large_pool_count": 1024, 00:20:19.095 "small_bufsize": 8192, 00:20:19.095 "large_bufsize": 135168, 00:20:19.095 "enable_numa": false 00:20:19.095 } 00:20:19.095 } 00:20:19.095 ] 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "subsystem": "sock", 00:20:19.095 "config": [ 00:20:19.095 { 00:20:19.095 "method": "sock_set_default_impl", 00:20:19.095 "params": { 00:20:19.095 "impl_name": "posix" 00:20:19.095 } 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "method": "sock_impl_set_options", 00:20:19.095 "params": { 00:20:19.095 "impl_name": "ssl", 00:20:19.095 "recv_buf_size": 4096, 00:20:19.095 "send_buf_size": 4096, 00:20:19.095 "enable_recv_pipe": true, 00:20:19.095 "enable_quickack": false, 00:20:19.095 "enable_placement_id": 0, 00:20:19.095 "enable_zerocopy_send_server": true, 00:20:19.095 "enable_zerocopy_send_client": false, 00:20:19.095 "zerocopy_threshold": 0, 00:20:19.095 "tls_version": 0, 00:20:19.095 "enable_ktls": false 00:20:19.095 } 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "method": "sock_impl_set_options", 00:20:19.095 "params": { 00:20:19.095 "impl_name": "posix", 00:20:19.095 "recv_buf_size": 2097152, 00:20:19.095 "send_buf_size": 2097152, 00:20:19.095 "enable_recv_pipe": true, 00:20:19.095 "enable_quickack": false, 00:20:19.095 "enable_placement_id": 0, 
00:20:19.095 "enable_zerocopy_send_server": true, 00:20:19.095 "enable_zerocopy_send_client": false, 00:20:19.095 "zerocopy_threshold": 0, 00:20:19.095 "tls_version": 0, 00:20:19.095 "enable_ktls": false 00:20:19.095 } 00:20:19.095 } 00:20:19.095 ] 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "subsystem": "vmd", 00:20:19.095 "config": [] 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "subsystem": "accel", 00:20:19.095 "config": [ 00:20:19.095 { 00:20:19.095 "method": "accel_set_options", 00:20:19.095 "params": { 00:20:19.095 "small_cache_size": 128, 00:20:19.095 "large_cache_size": 16, 00:20:19.095 "task_count": 2048, 00:20:19.095 "sequence_count": 2048, 00:20:19.095 "buf_count": 2048 00:20:19.095 } 00:20:19.095 } 00:20:19.095 ] 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "subsystem": "bdev", 00:20:19.095 "config": [ 00:20:19.095 { 00:20:19.095 "method": "bdev_set_options", 00:20:19.095 "params": { 00:20:19.095 "bdev_io_pool_size": 65535, 00:20:19.095 "bdev_io_cache_size": 256, 00:20:19.095 "bdev_auto_examine": true, 00:20:19.095 "iobuf_small_cache_size": 128, 00:20:19.095 "iobuf_large_cache_size": 16 00:20:19.095 } 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "method": "bdev_raid_set_options", 00:20:19.095 "params": { 00:20:19.095 "process_window_size_kb": 1024, 00:20:19.095 "process_max_bandwidth_mb_sec": 0 00:20:19.095 } 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "method": "bdev_iscsi_set_options", 00:20:19.095 "params": { 00:20:19.095 "timeout_sec": 30 00:20:19.095 } 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "method": "bdev_nvme_set_options", 00:20:19.095 "params": { 00:20:19.095 "action_on_timeout": "none", 00:20:19.095 "timeout_us": 0, 00:20:19.095 "timeout_admin_us": 0, 00:20:19.095 "keep_alive_timeout_ms": 10000, 00:20:19.095 "arbitration_burst": 0, 00:20:19.095 "low_priority_weight": 0, 00:20:19.095 "medium_priority_weight": 0, 00:20:19.095 "high_priority_weight": 0, 00:20:19.095 "nvme_adminq_poll_period_us": 10000, 00:20:19.095 "nvme_ioq_poll_period_us": 0, 
00:20:19.095 "io_queue_requests": 0, 00:20:19.095 "delay_cmd_submit": true, 00:20:19.095 "transport_retry_count": 4, 00:20:19.095 "bdev_retry_count": 3, 00:20:19.095 "transport_ack_timeout": 0, 00:20:19.095 "ctrlr_loss_timeout_sec": 0, 00:20:19.095 "reconnect_delay_sec": 0, 00:20:19.095 "fast_io_fail_timeout_sec": 0, 00:20:19.095 "disable_auto_failback": false, 00:20:19.095 "generate_uuids": false, 00:20:19.095 "transport_tos": 0, 00:20:19.095 "nvme_error_stat": false, 00:20:19.095 "rdma_srq_size": 0, 00:20:19.095 "io_path_stat": false, 00:20:19.095 "allow_accel_sequence": false, 00:20:19.095 "rdma_max_cq_size": 0, 00:20:19.095 "rdma_cm_event_timeout_ms": 0, 00:20:19.095 "dhchap_digests": [ 00:20:19.095 "sha256", 00:20:19.095 "sha384", 00:20:19.095 "sha512" 00:20:19.095 ], 00:20:19.095 "dhchap_dhgroups": [ 00:20:19.095 "null", 00:20:19.095 "ffdhe2048", 00:20:19.095 "ffdhe3072", 00:20:19.095 "ffdhe4096", 00:20:19.095 "ffdhe6144", 00:20:19.095 "ffdhe8192" 00:20:19.095 ] 00:20:19.095 } 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "method": "bdev_nvme_set_hotplug", 00:20:19.095 "params": { 00:20:19.095 "period_us": 100000, 00:20:19.095 "enable": false 00:20:19.095 } 00:20:19.095 }, 00:20:19.095 { 00:20:19.095 "method": "bdev_malloc_create", 00:20:19.095 "params": { 00:20:19.095 "name": "malloc0", 00:20:19.095 "num_blocks": 8192, 00:20:19.095 "block_size": 4096, 00:20:19.095 "physical_block_size": 4096, 00:20:19.095 "uuid": "3facc840-b736-4dde-b3b9-379a2021f192", 00:20:19.096 "optimal_io_boundary": 0, 00:20:19.096 "md_size": 0, 00:20:19.096 "dif_type": 0, 00:20:19.096 "dif_is_head_of_md": false, 00:20:19.096 "dif_pi_format": 0 00:20:19.096 } 00:20:19.096 }, 00:20:19.096 { 00:20:19.096 "method": "bdev_wait_for_examine" 00:20:19.096 } 00:20:19.096 ] 00:20:19.096 }, 00:20:19.096 { 00:20:19.096 "subsystem": "nbd", 00:20:19.096 "config": [] 00:20:19.096 }, 00:20:19.096 { 00:20:19.096 "subsystem": "scheduler", 00:20:19.096 "config": [ 00:20:19.096 { 00:20:19.096 "method": 
"framework_set_scheduler", 00:20:19.096 "params": { 00:20:19.096 "name": "static" 00:20:19.096 } 00:20:19.096 } 00:20:19.096 ] 00:20:19.096 }, 00:20:19.096 { 00:20:19.096 "subsystem": "nvmf", 00:20:19.096 "config": [ 00:20:19.096 { 00:20:19.096 "method": "nvmf_set_config", 00:20:19.096 "params": { 00:20:19.096 "discovery_filter": "match_any", 00:20:19.096 "admin_cmd_passthru": { 00:20:19.096 "identify_ctrlr": false 00:20:19.096 }, 00:20:19.096 "dhchap_digests": [ 00:20:19.096 "sha256", 00:20:19.096 "sha384", 00:20:19.096 "sha512" 00:20:19.096 ], 00:20:19.096 "dhchap_dhgroups": [ 00:20:19.096 "null", 00:20:19.096 "ffdhe2048", 00:20:19.096 "ffdhe3072", 00:20:19.096 "ffdhe4096", 00:20:19.096 "ffdhe6144", 00:20:19.096 "ffdhe8192" 00:20:19.096 ] 00:20:19.096 } 00:20:19.096 }, 00:20:19.096 { 00:20:19.096 "method": "nvmf_set_max_subsystems", 00:20:19.096 "params": { 00:20:19.096 "max_subsystems": 1024 00:20:19.096 } 00:20:19.096 }, 00:20:19.096 { 00:20:19.096 "method": "nvmf_set_crdt", 00:20:19.096 "params": { 00:20:19.096 "crdt1": 0, 00:20:19.096 "crdt2": 0, 00:20:19.096 "crdt3": 0 00:20:19.096 } 00:20:19.096 }, 00:20:19.096 { 00:20:19.096 "method": "nvmf_create_transport", 00:20:19.096 "params": { 00:20:19.096 "trtype": "TCP", 00:20:19.096 "max_queue_depth": 128, 00:20:19.096 "max_io_qpairs_per_ctrlr": 127, 00:20:19.096 "in_capsule_data_size": 4096, 00:20:19.096 "max_io_size": 131072, 00:20:19.096 "io_unit_size": 131072, 00:20:19.096 "max_aq_depth": 128, 00:20:19.096 "num_shared_buffers": 511, 00:20:19.096 "buf_cache_size": 4294967295, 00:20:19.096 "dif_insert_or_strip": false, 00:20:19.096 "zcopy": false, 00:20:19.096 "c2h_success": false, 00:20:19.096 "sock_priority": 0, 00:20:19.096 "abort_timeout_sec": 1, 00:20:19.096 "ack_timeout": 0, 00:20:19.096 "data_wr_pool_size": 0 00:20:19.096 } 00:20:19.096 }, 00:20:19.096 { 00:20:19.096 "method": "nvmf_create_subsystem", 00:20:19.096 "params": { 00:20:19.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.096 
"allow_any_host": false, 00:20:19.096 "serial_number": "SPDK00000000000001", 00:20:19.096 "model_number": "SPDK bdev Controller", 00:20:19.096 "max_namespaces": 10, 00:20:19.096 "min_cntlid": 1, 00:20:19.096 "max_cntlid": 65519, 00:20:19.096 "ana_reporting": false 00:20:19.096 } 00:20:19.096 }, 00:20:19.096 { 00:20:19.096 "method": "nvmf_subsystem_add_host", 00:20:19.096 "params": { 00:20:19.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.096 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.096 "psk": "key0" 00:20:19.096 } 00:20:19.096 }, 00:20:19.096 { 00:20:19.096 "method": "nvmf_subsystem_add_ns", 00:20:19.096 "params": { 00:20:19.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.096 "namespace": { 00:20:19.096 "nsid": 1, 00:20:19.096 "bdev_name": "malloc0", 00:20:19.096 "nguid": "3FACC840B7364DDEB3B9379A2021F192", 00:20:19.096 "uuid": "3facc840-b736-4dde-b3b9-379a2021f192", 00:20:19.096 "no_auto_visible": false 00:20:19.096 } 00:20:19.096 } 00:20:19.096 }, 00:20:19.096 { 00:20:19.096 "method": "nvmf_subsystem_add_listener", 00:20:19.096 "params": { 00:20:19.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.096 "listen_address": { 00:20:19.096 "trtype": "TCP", 00:20:19.096 "adrfam": "IPv4", 00:20:19.096 "traddr": "10.0.0.2", 00:20:19.096 "trsvcid": "4420" 00:20:19.096 }, 00:20:19.096 "secure_channel": true 00:20:19.096 } 00:20:19.096 } 00:20:19.096 ] 00:20:19.096 } 00:20:19.096 ] 00:20:19.096 }' 00:20:19.096 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:19.355 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:19.355 "subsystems": [ 00:20:19.355 { 00:20:19.355 "subsystem": "keyring", 00:20:19.355 "config": [ 00:20:19.355 { 00:20:19.355 "method": "keyring_file_add_key", 00:20:19.355 "params": { 00:20:19.355 "name": "key0", 00:20:19.355 "path": "/tmp/tmp.nl7cTjfzQg" 00:20:19.355 } 
00:20:19.355 } 00:20:19.355 ] 00:20:19.355 }, 00:20:19.355 { 00:20:19.355 "subsystem": "iobuf", 00:20:19.355 "config": [ 00:20:19.355 { 00:20:19.355 "method": "iobuf_set_options", 00:20:19.355 "params": { 00:20:19.355 "small_pool_count": 8192, 00:20:19.355 "large_pool_count": 1024, 00:20:19.355 "small_bufsize": 8192, 00:20:19.355 "large_bufsize": 135168, 00:20:19.355 "enable_numa": false 00:20:19.355 } 00:20:19.355 } 00:20:19.355 ] 00:20:19.355 }, 00:20:19.355 { 00:20:19.355 "subsystem": "sock", 00:20:19.355 "config": [ 00:20:19.355 { 00:20:19.355 "method": "sock_set_default_impl", 00:20:19.355 "params": { 00:20:19.355 "impl_name": "posix" 00:20:19.355 } 00:20:19.355 }, 00:20:19.355 { 00:20:19.355 "method": "sock_impl_set_options", 00:20:19.355 "params": { 00:20:19.355 "impl_name": "ssl", 00:20:19.355 "recv_buf_size": 4096, 00:20:19.355 "send_buf_size": 4096, 00:20:19.355 "enable_recv_pipe": true, 00:20:19.355 "enable_quickack": false, 00:20:19.355 "enable_placement_id": 0, 00:20:19.355 "enable_zerocopy_send_server": true, 00:20:19.355 "enable_zerocopy_send_client": false, 00:20:19.355 "zerocopy_threshold": 0, 00:20:19.355 "tls_version": 0, 00:20:19.355 "enable_ktls": false 00:20:19.355 } 00:20:19.355 }, 00:20:19.355 { 00:20:19.355 "method": "sock_impl_set_options", 00:20:19.355 "params": { 00:20:19.355 "impl_name": "posix", 00:20:19.355 "recv_buf_size": 2097152, 00:20:19.355 "send_buf_size": 2097152, 00:20:19.355 "enable_recv_pipe": true, 00:20:19.355 "enable_quickack": false, 00:20:19.355 "enable_placement_id": 0, 00:20:19.355 "enable_zerocopy_send_server": true, 00:20:19.355 "enable_zerocopy_send_client": false, 00:20:19.355 "zerocopy_threshold": 0, 00:20:19.355 "tls_version": 0, 00:20:19.355 "enable_ktls": false 00:20:19.355 } 00:20:19.355 } 00:20:19.355 ] 00:20:19.355 }, 00:20:19.355 { 00:20:19.355 "subsystem": "vmd", 00:20:19.355 "config": [] 00:20:19.355 }, 00:20:19.355 { 00:20:19.355 "subsystem": "accel", 00:20:19.355 "config": [ 00:20:19.355 { 00:20:19.355 
"method": "accel_set_options", 00:20:19.355 "params": { 00:20:19.355 "small_cache_size": 128, 00:20:19.355 "large_cache_size": 16, 00:20:19.355 "task_count": 2048, 00:20:19.355 "sequence_count": 2048, 00:20:19.355 "buf_count": 2048 00:20:19.355 } 00:20:19.355 } 00:20:19.355 ] 00:20:19.355 }, 00:20:19.355 { 00:20:19.355 "subsystem": "bdev", 00:20:19.355 "config": [ 00:20:19.355 { 00:20:19.355 "method": "bdev_set_options", 00:20:19.355 "params": { 00:20:19.355 "bdev_io_pool_size": 65535, 00:20:19.355 "bdev_io_cache_size": 256, 00:20:19.355 "bdev_auto_examine": true, 00:20:19.355 "iobuf_small_cache_size": 128, 00:20:19.355 "iobuf_large_cache_size": 16 00:20:19.355 } 00:20:19.355 }, 00:20:19.355 { 00:20:19.355 "method": "bdev_raid_set_options", 00:20:19.355 "params": { 00:20:19.355 "process_window_size_kb": 1024, 00:20:19.355 "process_max_bandwidth_mb_sec": 0 00:20:19.355 } 00:20:19.355 }, 00:20:19.355 { 00:20:19.356 "method": "bdev_iscsi_set_options", 00:20:19.356 "params": { 00:20:19.356 "timeout_sec": 30 00:20:19.356 } 00:20:19.356 }, 00:20:19.356 { 00:20:19.356 "method": "bdev_nvme_set_options", 00:20:19.356 "params": { 00:20:19.356 "action_on_timeout": "none", 00:20:19.356 "timeout_us": 0, 00:20:19.356 "timeout_admin_us": 0, 00:20:19.356 "keep_alive_timeout_ms": 10000, 00:20:19.356 "arbitration_burst": 0, 00:20:19.356 "low_priority_weight": 0, 00:20:19.356 "medium_priority_weight": 0, 00:20:19.356 "high_priority_weight": 0, 00:20:19.356 "nvme_adminq_poll_period_us": 10000, 00:20:19.356 "nvme_ioq_poll_period_us": 0, 00:20:19.356 "io_queue_requests": 512, 00:20:19.356 "delay_cmd_submit": true, 00:20:19.356 "transport_retry_count": 4, 00:20:19.356 "bdev_retry_count": 3, 00:20:19.356 "transport_ack_timeout": 0, 00:20:19.356 "ctrlr_loss_timeout_sec": 0, 00:20:19.356 "reconnect_delay_sec": 0, 00:20:19.356 "fast_io_fail_timeout_sec": 0, 00:20:19.356 "disable_auto_failback": false, 00:20:19.356 "generate_uuids": false, 00:20:19.356 "transport_tos": 0, 00:20:19.356 
"nvme_error_stat": false, 00:20:19.356 "rdma_srq_size": 0, 00:20:19.356 "io_path_stat": false, 00:20:19.356 "allow_accel_sequence": false, 00:20:19.356 "rdma_max_cq_size": 0, 00:20:19.356 "rdma_cm_event_timeout_ms": 0, 00:20:19.356 "dhchap_digests": [ 00:20:19.356 "sha256", 00:20:19.356 "sha384", 00:20:19.356 "sha512" 00:20:19.356 ], 00:20:19.356 "dhchap_dhgroups": [ 00:20:19.356 "null", 00:20:19.356 "ffdhe2048", 00:20:19.356 "ffdhe3072", 00:20:19.356 "ffdhe4096", 00:20:19.356 "ffdhe6144", 00:20:19.356 "ffdhe8192" 00:20:19.356 ] 00:20:19.356 } 00:20:19.356 }, 00:20:19.356 { 00:20:19.356 "method": "bdev_nvme_attach_controller", 00:20:19.356 "params": { 00:20:19.356 "name": "TLSTEST", 00:20:19.356 "trtype": "TCP", 00:20:19.356 "adrfam": "IPv4", 00:20:19.356 "traddr": "10.0.0.2", 00:20:19.356 "trsvcid": "4420", 00:20:19.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.356 "prchk_reftag": false, 00:20:19.356 "prchk_guard": false, 00:20:19.356 "ctrlr_loss_timeout_sec": 0, 00:20:19.356 "reconnect_delay_sec": 0, 00:20:19.356 "fast_io_fail_timeout_sec": 0, 00:20:19.356 "psk": "key0", 00:20:19.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.356 "hdgst": false, 00:20:19.356 "ddgst": false, 00:20:19.356 "multipath": "multipath" 00:20:19.356 } 00:20:19.356 }, 00:20:19.356 { 00:20:19.356 "method": "bdev_nvme_set_hotplug", 00:20:19.356 "params": { 00:20:19.356 "period_us": 100000, 00:20:19.356 "enable": false 00:20:19.356 } 00:20:19.356 }, 00:20:19.356 { 00:20:19.356 "method": "bdev_wait_for_examine" 00:20:19.356 } 00:20:19.356 ] 00:20:19.356 }, 00:20:19.356 { 00:20:19.356 "subsystem": "nbd", 00:20:19.356 "config": [] 00:20:19.356 } 00:20:19.356 ] 00:20:19.356 }' 00:20:19.356 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1212908 00:20:19.356 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1212908 ']' 00:20:19.356 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 1212908 00:20:19.356 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:19.356 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.356 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1212908 00:20:19.356 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:19.356 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:19.356 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1212908' 00:20:19.356 killing process with pid 1212908 00:20:19.356 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1212908 00:20:19.356 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.356 00:20:19.356 Latency(us) 00:20:19.356 [2024-12-10T04:45:07.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.356 [2024-12-10T04:45:07.252Z] =================================================================================================================== 00:20:19.356 [2024-12-10T04:45:07.252Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:19.356 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1212908 00:20:19.615 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1212568 00:20:19.615 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1212568 ']' 00:20:19.615 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1212568 00:20:19.615 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:19.615 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.615 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1212568 00:20:19.615 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:19.615 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:19.615 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1212568' 00:20:19.615 killing process with pid 1212568 00:20:19.615 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1212568 00:20:19.615 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1212568 00:20:19.874 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:19.874 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.874 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.874 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:19.874 "subsystems": [ 00:20:19.874 { 00:20:19.874 "subsystem": "keyring", 00:20:19.874 "config": [ 00:20:19.874 { 00:20:19.874 "method": "keyring_file_add_key", 00:20:19.874 "params": { 00:20:19.874 "name": "key0", 00:20:19.874 "path": "/tmp/tmp.nl7cTjfzQg" 00:20:19.874 } 00:20:19.874 } 00:20:19.874 ] 00:20:19.874 }, 00:20:19.874 { 00:20:19.874 "subsystem": "iobuf", 00:20:19.874 "config": [ 00:20:19.874 { 00:20:19.874 "method": "iobuf_set_options", 00:20:19.874 "params": { 00:20:19.874 "small_pool_count": 8192, 00:20:19.874 "large_pool_count": 1024, 00:20:19.874 "small_bufsize": 8192, 00:20:19.874 "large_bufsize": 135168, 00:20:19.874 "enable_numa": false 00:20:19.874 } 00:20:19.874 } 00:20:19.874 ] 00:20:19.874 }, 
00:20:19.874 { 00:20:19.874 "subsystem": "sock", 00:20:19.874 "config": [ 00:20:19.874 { 00:20:19.874 "method": "sock_set_default_impl", 00:20:19.874 "params": { 00:20:19.874 "impl_name": "posix" 00:20:19.874 } 00:20:19.874 }, 00:20:19.874 { 00:20:19.874 "method": "sock_impl_set_options", 00:20:19.874 "params": { 00:20:19.874 "impl_name": "ssl", 00:20:19.874 "recv_buf_size": 4096, 00:20:19.874 "send_buf_size": 4096, 00:20:19.874 "enable_recv_pipe": true, 00:20:19.874 "enable_quickack": false, 00:20:19.874 "enable_placement_id": 0, 00:20:19.874 "enable_zerocopy_send_server": true, 00:20:19.874 "enable_zerocopy_send_client": false, 00:20:19.874 "zerocopy_threshold": 0, 00:20:19.874 "tls_version": 0, 00:20:19.874 "enable_ktls": false 00:20:19.874 } 00:20:19.874 }, 00:20:19.874 { 00:20:19.874 "method": "sock_impl_set_options", 00:20:19.874 "params": { 00:20:19.874 "impl_name": "posix", 00:20:19.874 "recv_buf_size": 2097152, 00:20:19.874 "send_buf_size": 2097152, 00:20:19.874 "enable_recv_pipe": true, 00:20:19.874 "enable_quickack": false, 00:20:19.874 "enable_placement_id": 0, 00:20:19.874 "enable_zerocopy_send_server": true, 00:20:19.874 "enable_zerocopy_send_client": false, 00:20:19.874 "zerocopy_threshold": 0, 00:20:19.874 "tls_version": 0, 00:20:19.874 "enable_ktls": false 00:20:19.874 } 00:20:19.874 } 00:20:19.874 ] 00:20:19.874 }, 00:20:19.874 { 00:20:19.874 "subsystem": "vmd", 00:20:19.874 "config": [] 00:20:19.874 }, 00:20:19.874 { 00:20:19.874 "subsystem": "accel", 00:20:19.874 "config": [ 00:20:19.874 { 00:20:19.874 "method": "accel_set_options", 00:20:19.874 "params": { 00:20:19.874 "small_cache_size": 128, 00:20:19.874 "large_cache_size": 16, 00:20:19.874 "task_count": 2048, 00:20:19.874 "sequence_count": 2048, 00:20:19.874 "buf_count": 2048 00:20:19.874 } 00:20:19.874 } 00:20:19.874 ] 00:20:19.874 }, 00:20:19.874 { 00:20:19.874 "subsystem": "bdev", 00:20:19.874 "config": [ 00:20:19.874 { 00:20:19.874 "method": "bdev_set_options", 00:20:19.874 "params": { 
00:20:19.874 "bdev_io_pool_size": 65535, 00:20:19.874 "bdev_io_cache_size": 256, 00:20:19.874 "bdev_auto_examine": true, 00:20:19.874 "iobuf_small_cache_size": 128, 00:20:19.874 "iobuf_large_cache_size": 16 00:20:19.874 } 00:20:19.874 }, 00:20:19.874 { 00:20:19.874 "method": "bdev_raid_set_options", 00:20:19.874 "params": { 00:20:19.874 "process_window_size_kb": 1024, 00:20:19.874 "process_max_bandwidth_mb_sec": 0 00:20:19.874 } 00:20:19.874 }, 00:20:19.874 { 00:20:19.874 "method": "bdev_iscsi_set_options", 00:20:19.874 "params": { 00:20:19.874 "timeout_sec": 30 00:20:19.874 } 00:20:19.874 }, 00:20:19.874 { 00:20:19.874 "method": "bdev_nvme_set_options", 00:20:19.874 "params": { 00:20:19.874 "action_on_timeout": "none", 00:20:19.874 "timeout_us": 0, 00:20:19.874 "timeout_admin_us": 0, 00:20:19.874 "keep_alive_timeout_ms": 10000, 00:20:19.874 "arbitration_burst": 0, 00:20:19.874 "low_priority_weight": 0, 00:20:19.874 "medium_priority_weight": 0, 00:20:19.874 "high_priority_weight": 0, 00:20:19.874 "nvme_adminq_poll_period_us": 10000, 00:20:19.874 "nvme_ioq_poll_period_us": 0, 00:20:19.874 "io_queue_requests": 0, 00:20:19.874 "delay_cmd_submit": true, 00:20:19.874 "transport_retry_count": 4, 00:20:19.874 "bdev_retry_count": 3, 00:20:19.874 "transport_ack_timeout": 0, 00:20:19.874 "ctrlr_loss_timeout_sec": 0, 00:20:19.874 "reconnect_delay_sec": 0, 00:20:19.874 "fast_io_fail_timeout_sec": 0, 00:20:19.874 "disable_auto_failback": false, 00:20:19.874 "generate_uuids": false, 00:20:19.874 "transport_tos": 0, 00:20:19.874 "nvme_error_stat": false, 00:20:19.874 "rdma_srq_size": 0, 00:20:19.874 "io_path_stat": false, 00:20:19.874 "allow_accel_sequence": false, 00:20:19.874 "rdma_max_cq_size": 0, 00:20:19.874 "rdma_cm_event_timeout_ms": 0, 00:20:19.874 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.874 "dhchap_digests": [ 00:20:19.874 "sha256", 00:20:19.874 "sha384", 00:20:19.874 "sha512" 00:20:19.874 ], 00:20:19.874 
"dhchap_dhgroups": [ 00:20:19.874 "null", 00:20:19.874 "ffdhe2048", 00:20:19.874 "ffdhe3072", 00:20:19.875 "ffdhe4096", 00:20:19.875 "ffdhe6144", 00:20:19.875 "ffdhe8192" 00:20:19.875 ] 00:20:19.875 } 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "method": "bdev_nvme_set_hotplug", 00:20:19.875 "params": { 00:20:19.875 "period_us": 100000, 00:20:19.875 "enable": false 00:20:19.875 } 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "method": "bdev_malloc_create", 00:20:19.875 "params": { 00:20:19.875 "name": "malloc0", 00:20:19.875 "num_blocks": 8192, 00:20:19.875 "block_size": 4096, 00:20:19.875 "physical_block_size": 4096, 00:20:19.875 "uuid": "3facc840-b736-4dde-b3b9-379a2021f192", 00:20:19.875 "optimal_io_boundary": 0, 00:20:19.875 "md_size": 0, 00:20:19.875 "dif_type": 0, 00:20:19.875 "dif_is_head_of_md": false, 00:20:19.875 "dif_pi_format": 0 00:20:19.875 } 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "method": "bdev_wait_for_examine" 00:20:19.875 } 00:20:19.875 ] 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "subsystem": "nbd", 00:20:19.875 "config": [] 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "subsystem": "scheduler", 00:20:19.875 "config": [ 00:20:19.875 { 00:20:19.875 "method": "framework_set_scheduler", 00:20:19.875 "params": { 00:20:19.875 "name": "static" 00:20:19.875 } 00:20:19.875 } 00:20:19.875 ] 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "subsystem": "nvmf", 00:20:19.875 "config": [ 00:20:19.875 { 00:20:19.875 "method": "nvmf_set_config", 00:20:19.875 "params": { 00:20:19.875 "discovery_filter": "match_any", 00:20:19.875 "admin_cmd_passthru": { 00:20:19.875 "identify_ctrlr": false 00:20:19.875 }, 00:20:19.875 "dhchap_digests": [ 00:20:19.875 "sha256", 00:20:19.875 "sha384", 00:20:19.875 "sha512" 00:20:19.875 ], 00:20:19.875 "dhchap_dhgroups": [ 00:20:19.875 "null", 00:20:19.875 "ffdhe2048", 00:20:19.875 "ffdhe3072", 00:20:19.875 "ffdhe4096", 00:20:19.875 "ffdhe6144", 00:20:19.875 "ffdhe8192" 00:20:19.875 ] 00:20:19.875 } 00:20:19.875 }, 00:20:19.875 { 
00:20:19.875 "method": "nvmf_set_max_subsystems", 00:20:19.875 "params": { 00:20:19.875 "max_subsystems": 1024 00:20:19.875 } 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "method": "nvmf_set_crdt", 00:20:19.875 "params": { 00:20:19.875 "crdt1": 0, 00:20:19.875 "crdt2": 0, 00:20:19.875 "crdt3": 0 00:20:19.875 } 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "method": "nvmf_create_transport", 00:20:19.875 "params": { 00:20:19.875 "trtype": "TCP", 00:20:19.875 "max_queue_depth": 128, 00:20:19.875 "max_io_qpairs_per_ctrlr": 127, 00:20:19.875 "in_capsule_data_size": 4096, 00:20:19.875 "max_io_size": 131072, 00:20:19.875 "io_unit_size": 131072, 00:20:19.875 "max_aq_depth": 128, 00:20:19.875 "num_shared_buffers": 511, 00:20:19.875 "buf_cache_size": 4294967295, 00:20:19.875 "dif_insert_or_strip": false, 00:20:19.875 "zcopy": false, 00:20:19.875 "c2h_success": false, 00:20:19.875 "sock_priority": 0, 00:20:19.875 "abort_timeout_sec": 1, 00:20:19.875 "ack_timeout": 0, 00:20:19.875 "data_wr_pool_size": 0 00:20:19.875 } 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "method": "nvmf_create_subsystem", 00:20:19.875 "params": { 00:20:19.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.875 "allow_any_host": false, 00:20:19.875 "serial_number": "SPDK00000000000001", 00:20:19.875 "model_number": "SPDK bdev Controller", 00:20:19.875 "max_namespaces": 10, 00:20:19.875 "min_cntlid": 1, 00:20:19.875 "max_cntlid": 65519, 00:20:19.875 "ana_reporting": false 00:20:19.875 } 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "method": "nvmf_subsystem_add_host", 00:20:19.875 "params": { 00:20:19.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.875 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.875 "psk": "key0" 00:20:19.875 } 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "method": "nvmf_subsystem_add_ns", 00:20:19.875 "params": { 00:20:19.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.875 "namespace": { 00:20:19.875 "nsid": 1, 00:20:19.875 "bdev_name": "malloc0", 00:20:19.875 "nguid": 
"3FACC840B7364DDEB3B9379A2021F192", 00:20:19.875 "uuid": "3facc840-b736-4dde-b3b9-379a2021f192", 00:20:19.875 "no_auto_visible": false 00:20:19.875 } 00:20:19.875 } 00:20:19.875 }, 00:20:19.875 { 00:20:19.875 "method": "nvmf_subsystem_add_listener", 00:20:19.875 "params": { 00:20:19.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.875 "listen_address": { 00:20:19.875 "trtype": "TCP", 00:20:19.875 "adrfam": "IPv4", 00:20:19.875 "traddr": "10.0.0.2", 00:20:19.875 "trsvcid": "4420" 00:20:19.875 }, 00:20:19.875 "secure_channel": true 00:20:19.875 } 00:20:19.875 } 00:20:19.875 ] 00:20:19.875 } 00:20:19.875 ] 00:20:19.875 }' 00:20:19.875 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1213653 00:20:19.875 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:19.875 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1213653 00:20:19.875 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1213653 ']' 00:20:19.875 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.875 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.875 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:19.875 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.875 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.875 [2024-12-10 05:45:07.653187] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:20:19.875 [2024-12-10 05:45:07.653233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.875 [2024-12-10 05:45:07.721889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.875 [2024-12-10 05:45:07.760555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.875 [2024-12-10 05:45:07.760590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.875 [2024-12-10 05:45:07.760598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.875 [2024-12-10 05:45:07.760603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.875 [2024-12-10 05:45:07.760609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:19.875 [2024-12-10 05:45:07.761109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.134 [2024-12-10 05:45:07.973760] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.134 [2024-12-10 05:45:08.005779] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.134 [2024-12-10 05:45:08.005975] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1213694 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1213694 /var/tmp/bdevperf.sock 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1213694 ']' 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:20.701 "subsystems": [ 00:20:20.701 { 00:20:20.701 "subsystem": "keyring", 00:20:20.701 "config": [ 00:20:20.701 { 00:20:20.701 "method": "keyring_file_add_key", 00:20:20.701 "params": { 00:20:20.701 "name": "key0", 00:20:20.701 "path": "/tmp/tmp.nl7cTjfzQg" 00:20:20.701 } 00:20:20.701 } 00:20:20.701 ] 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "subsystem": "iobuf", 00:20:20.701 "config": [ 00:20:20.701 { 00:20:20.701 "method": "iobuf_set_options", 00:20:20.701 "params": { 00:20:20.701 "small_pool_count": 8192, 00:20:20.701 "large_pool_count": 1024, 00:20:20.701 "small_bufsize": 8192, 00:20:20.701 "large_bufsize": 135168, 00:20:20.701 "enable_numa": false 00:20:20.701 } 00:20:20.701 } 00:20:20.701 ] 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "subsystem": "sock", 00:20:20.701 "config": [ 00:20:20.701 { 00:20:20.701 "method": "sock_set_default_impl", 00:20:20.701 "params": { 00:20:20.701 "impl_name": "posix" 00:20:20.701 } 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "method": "sock_impl_set_options", 00:20:20.701 "params": { 00:20:20.701 "impl_name": "ssl", 00:20:20.701 "recv_buf_size": 4096, 00:20:20.701 "send_buf_size": 4096, 00:20:20.701 "enable_recv_pipe": true, 00:20:20.701 "enable_quickack": false, 00:20:20.701 "enable_placement_id": 0, 00:20:20.701 "enable_zerocopy_send_server": true, 00:20:20.701 "enable_zerocopy_send_client": false, 00:20:20.701 "zerocopy_threshold": 0, 00:20:20.701 "tls_version": 0, 00:20:20.701 "enable_ktls": false 00:20:20.701 } 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "method": "sock_impl_set_options", 00:20:20.701 "params": { 
00:20:20.701 "impl_name": "posix", 00:20:20.701 "recv_buf_size": 2097152, 00:20:20.701 "send_buf_size": 2097152, 00:20:20.701 "enable_recv_pipe": true, 00:20:20.701 "enable_quickack": false, 00:20:20.701 "enable_placement_id": 0, 00:20:20.701 "enable_zerocopy_send_server": true, 00:20:20.701 "enable_zerocopy_send_client": false, 00:20:20.701 "zerocopy_threshold": 0, 00:20:20.701 "tls_version": 0, 00:20:20.701 "enable_ktls": false 00:20:20.701 } 00:20:20.701 } 00:20:20.701 ] 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "subsystem": "vmd", 00:20:20.701 "config": [] 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "subsystem": "accel", 00:20:20.701 "config": [ 00:20:20.701 { 00:20:20.701 "method": "accel_set_options", 00:20:20.701 "params": { 00:20:20.701 "small_cache_size": 128, 00:20:20.701 "large_cache_size": 16, 00:20:20.701 "task_count": 2048, 00:20:20.701 "sequence_count": 2048, 00:20:20.701 "buf_count": 2048 00:20:20.701 } 00:20:20.701 } 00:20:20.701 ] 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "subsystem": "bdev", 00:20:20.701 "config": [ 00:20:20.701 { 00:20:20.701 "method": "bdev_set_options", 00:20:20.701 "params": { 00:20:20.701 "bdev_io_pool_size": 65535, 00:20:20.701 "bdev_io_cache_size": 256, 00:20:20.701 "bdev_auto_examine": true, 00:20:20.701 "iobuf_small_cache_size": 128, 00:20:20.701 "iobuf_large_cache_size": 16 00:20:20.701 } 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "method": "bdev_raid_set_options", 00:20:20.701 "params": { 00:20:20.701 "process_window_size_kb": 1024, 00:20:20.701 "process_max_bandwidth_mb_sec": 0 00:20:20.701 } 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "method": "bdev_iscsi_set_options", 00:20:20.701 "params": { 00:20:20.701 "timeout_sec": 30 00:20:20.701 } 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "method": "bdev_nvme_set_options", 00:20:20.701 "params": { 00:20:20.701 "action_on_timeout": "none", 00:20:20.701 "timeout_us": 0, 00:20:20.701 "timeout_admin_us": 0, 00:20:20.701 "keep_alive_timeout_ms": 10000, 00:20:20.701 
"arbitration_burst": 0, 00:20:20.701 "low_priority_weight": 0, 00:20:20.701 "medium_priority_weight": 0, 00:20:20.701 "high_priority_weight": 0, 00:20:20.701 "nvme_adminq_poll_period_us": 10000, 00:20:20.701 "nvme_ioq_poll_period_us": 0, 00:20:20.701 "io_queue_requests": 512, 00:20:20.701 "delay_cmd_submit": true, 00:20:20.701 "transport_retry_count": 4, 00:20:20.701 "bdev_retry_count": 3, 00:20:20.701 "transport_ack_timeout": 0, 00:20:20.701 "ctrlr_loss_timeout_sec": 0, 00:20:20.701 "reconnect_delay_sec": 0, 00:20:20.701 "fast_io_fail_timeout_sec": 0, 00:20:20.701 "disable_auto_failback": false, 00:20:20.701 "generate_uuids": false, 00:20:20.701 "transport_tos": 0, 00:20:20.701 "nvme_error_stat": false, 00:20:20.701 "rdma_srq_size": 0, 00:20:20.701 "io_path_stat": false, 00:20:20.701 "allow_accel_sequence": false, 00:20:20.701 "rdma_max_cq_size": 0, 00:20:20.701 "rdma_cm_event_timeout_ms": 0, 00:20:20.701 "dhchap_digests": [ 00:20:20.701 "sha256", 00:20:20.701 "sha384", 00:20:20.701 "sha512" 00:20:20.701 ], 00:20:20.701 "dhchap_dhgroups": [ 00:20:20.701 "null", 00:20:20.701 "ffdhe2048", 00:20:20.701 "ffdhe3072", 00:20:20.701 "ffdhe4096", 00:20:20.701 "ffdhe6144", 00:20:20.701 "ffdhe8192" 00:20:20.701 ] 00:20:20.701 } 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "method": "bdev_nvme_attach_controller", 00:20:20.701 "params": { 00:20:20.701 "name": "TLSTEST", 00:20:20.701 "trtype": "TCP", 00:20:20.701 "adrfam": "IPv4", 00:20:20.701 "traddr": "10.0.0.2", 00:20:20.701 "trsvcid": "4420", 00:20:20.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.701 "prchk_reftag": false, 00:20:20.701 "prchk_guard": false, 00:20:20.701 "ctrlr_loss_timeout_sec": 0, 00:20:20.701 "reconnect_delay_sec": 0, 00:20:20.701 "fast_io_fail_timeout_sec": 0, 00:20:20.701 "psk": "key0", 00:20:20.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.701 "hdgst": false, 00:20:20.701 "ddgst": false, 00:20:20.701 "multipath": "multipath" 00:20:20.701 } 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 
"method": "bdev_nvme_set_hotplug", 00:20:20.701 "params": { 00:20:20.701 "period_us": 100000, 00:20:20.701 "enable": false 00:20:20.701 } 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "method": "bdev_wait_for_examine" 00:20:20.701 } 00:20:20.701 ] 00:20:20.701 }, 00:20:20.701 { 00:20:20.701 "subsystem": "nbd", 00:20:20.701 "config": [] 00:20:20.701 } 00:20:20.701 ] 00:20:20.701 }' 00:20:20.701 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.702 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.702 [2024-12-10 05:45:08.571326] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:20:20.702 [2024-12-10 05:45:08.571376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213694 ] 00:20:20.960 [2024-12-10 05:45:08.644800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.960 [2024-12-10 05:45:08.685923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.960 [2024-12-10 05:45:08.839872] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.527 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.527 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:21.527 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:21.785 Running I/O for 10 seconds... 
00:20:23.653 5181.00 IOPS, 20.24 MiB/s [2024-12-10T04:45:12.922Z] 5354.00 IOPS, 20.91 MiB/s [2024-12-10T04:45:13.855Z] 5303.33 IOPS, 20.72 MiB/s [2024-12-10T04:45:14.791Z] 5367.50 IOPS, 20.97 MiB/s [2024-12-10T04:45:15.724Z] 5412.60 IOPS, 21.14 MiB/s [2024-12-10T04:45:16.659Z] 5444.17 IOPS, 21.27 MiB/s [2024-12-10T04:45:17.593Z] 5452.00 IOPS, 21.30 MiB/s [2024-12-10T04:45:18.527Z] 5453.88 IOPS, 21.30 MiB/s [2024-12-10T04:45:19.902Z] 5467.89 IOPS, 21.36 MiB/s [2024-12-10T04:45:19.902Z] 5475.60 IOPS, 21.39 MiB/s 00:20:32.006 Latency(us) 00:20:32.006 [2024-12-10T04:45:19.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.006 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:32.006 Verification LBA range: start 0x0 length 0x2000 00:20:32.006 TLSTESTn1 : 10.01 5480.04 21.41 0.00 0.00 23321.01 5055.63 22968.81 00:20:32.006 [2024-12-10T04:45:19.902Z] =================================================================================================================== 00:20:32.006 [2024-12-10T04:45:19.902Z] Total : 5480.04 21.41 0.00 0.00 23321.01 5055.63 22968.81 00:20:32.006 { 00:20:32.006 "results": [ 00:20:32.006 { 00:20:32.006 "job": "TLSTESTn1", 00:20:32.006 "core_mask": "0x4", 00:20:32.006 "workload": "verify", 00:20:32.006 "status": "finished", 00:20:32.006 "verify_range": { 00:20:32.006 "start": 0, 00:20:32.006 "length": 8192 00:20:32.006 }, 00:20:32.006 "queue_depth": 128, 00:20:32.006 "io_size": 4096, 00:20:32.006 "runtime": 10.014885, 00:20:32.006 "iops": 5480.042956059905, 00:20:32.006 "mibps": 21.406417797109004, 00:20:32.006 "io_failed": 0, 00:20:32.006 "io_timeout": 0, 00:20:32.006 "avg_latency_us": 23321.01382116784, 00:20:32.006 "min_latency_us": 5055.634285714285, 00:20:32.006 "max_latency_us": 22968.80761904762 00:20:32.006 } 00:20:32.006 ], 00:20:32.006 "core_count": 1 00:20:32.006 } 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1213694 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1213694 ']' 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1213694 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1213694 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1213694' 00:20:32.006 killing process with pid 1213694 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1213694 00:20:32.006 Received shutdown signal, test time was about 10.000000 seconds 00:20:32.006 00:20:32.006 Latency(us) 00:20:32.006 [2024-12-10T04:45:19.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.006 [2024-12-10T04:45:19.902Z] =================================================================================================================== 00:20:32.006 [2024-12-10T04:45:19.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1213694 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1213653 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1213653 ']' 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1213653 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1213653 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1213653' 00:20:32.006 killing process with pid 1213653 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1213653 00:20:32.006 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1213653 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1215670 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1215670 00:20:32.264 
05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1215670 ']' 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.264 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.264 [2024-12-10 05:45:20.044302] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:20:32.264 [2024-12-10 05:45:20.044352] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.264 [2024-12-10 05:45:20.121476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.522 [2024-12-10 05:45:20.160086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.522 [2024-12-10 05:45:20.160122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.522 [2024-12-10 05:45:20.160130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.522 [2024-12-10 05:45:20.160136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:32.522 [2024-12-10 05:45:20.160142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.522 [2024-12-10 05:45:20.160620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.522 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.522 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:32.522 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:32.522 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:32.522 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.522 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.522 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.nl7cTjfzQg 00:20:32.522 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nl7cTjfzQg 00:20:32.522 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:32.780 [2024-12-10 05:45:20.464717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.780 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:33.038 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:33.038 [2024-12-10 05:45:20.861716] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:33.038 [2024-12-10 05:45:20.861911] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.038 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:33.298 malloc0 00:20:33.298 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:33.556 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nl7cTjfzQg 00:20:33.814 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1215952 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1215952 /var/tmp/bdevperf.sock 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1215952 ']' 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.102 
05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.102 [2024-12-10 05:45:21.785714] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:20:34.102 [2024-12-10 05:45:21.785764] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215952 ] 00:20:34.102 [2024-12-10 05:45:21.840156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.102 [2024-12-10 05:45:21.879311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:34.102 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nl7cTjfzQg 00:20:34.378 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:34.660 [2024-12-10 05:45:22.334849] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:20:34.660 nvme0n1 00:20:34.660 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:34.660 Running I/O for 1 seconds... 00:20:36.064 5491.00 IOPS, 21.45 MiB/s 00:20:36.064 Latency(us) 00:20:36.064 [2024-12-10T04:45:23.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.064 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:36.064 Verification LBA range: start 0x0 length 0x2000 00:20:36.064 nvme0n1 : 1.01 5547.64 21.67 0.00 0.00 22920.62 5336.50 19848.05 00:20:36.064 [2024-12-10T04:45:23.960Z] =================================================================================================================== 00:20:36.064 [2024-12-10T04:45:23.960Z] Total : 5547.64 21.67 0.00 0.00 22920.62 5336.50 19848.05 00:20:36.064 { 00:20:36.064 "results": [ 00:20:36.064 { 00:20:36.064 "job": "nvme0n1", 00:20:36.064 "core_mask": "0x2", 00:20:36.064 "workload": "verify", 00:20:36.064 "status": "finished", 00:20:36.064 "verify_range": { 00:20:36.064 "start": 0, 00:20:36.064 "length": 8192 00:20:36.064 }, 00:20:36.064 "queue_depth": 128, 00:20:36.064 "io_size": 4096, 00:20:36.064 "runtime": 1.013044, 00:20:36.064 "iops": 5547.63662782663, 00:20:36.064 "mibps": 21.670455577447772, 00:20:36.065 "io_failed": 0, 00:20:36.065 "io_timeout": 0, 00:20:36.065 "avg_latency_us": 22920.62461006609, 00:20:36.065 "min_latency_us": 5336.5028571428575, 00:20:36.065 "max_latency_us": 19848.045714285716 00:20:36.065 } 00:20:36.065 ], 00:20:36.065 "core_count": 1 00:20:36.065 } 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1215952 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1215952 ']' 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1215952 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1215952 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1215952' 00:20:36.065 killing process with pid 1215952 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1215952 00:20:36.065 Received shutdown signal, test time was about 1.000000 seconds 00:20:36.065 00:20:36.065 Latency(us) 00:20:36.065 [2024-12-10T04:45:23.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.065 [2024-12-10T04:45:23.961Z] =================================================================================================================== 00:20:36.065 [2024-12-10T04:45:23.961Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1215952 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1215670 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1215670 ']' 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1215670 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1215670 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1215670' 00:20:36.065 killing process with pid 1215670 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1215670 00:20:36.065 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1215670 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1216282 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1216282 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1216282 ']' 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.323 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.323 [2024-12-10 05:45:24.030327] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:20:36.323 [2024-12-10 05:45:24.030376] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.323 [2024-12-10 05:45:24.111964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.323 [2024-12-10 05:45:24.152474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.323 [2024-12-10 05:45:24.152510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.323 [2024-12-10 05:45:24.152517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.323 [2024-12-10 05:45:24.152523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.323 [2024-12-10 05:45:24.152528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:36.323 [2024-12-10 05:45:24.152987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.257 [2024-12-10 05:45:24.904402] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.257 malloc0 00:20:37.257 [2024-12-10 05:45:24.932486] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:37.257 [2024-12-10 05:45:24.932684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1216438 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1216438 /var/tmp/bdevperf.sock 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1216438 ']' 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.257 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.257 [2024-12-10 05:45:25.009402] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:20:37.257 [2024-12-10 05:45:25.009446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216438 ] 00:20:37.257 [2024-12-10 05:45:25.084305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.257 [2024-12-10 05:45:25.123212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.515 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.515 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:37.515 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nl7cTjfzQg 00:20:37.772 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:37.772 [2024-12-10 05:45:25.582904] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.772 nvme0n1 00:20:38.029 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:38.029 Running I/O for 1 seconds... 
00:20:38.962 5369.00 IOPS, 20.97 MiB/s 00:20:38.962 Latency(us) 00:20:38.962 [2024-12-10T04:45:26.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.962 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:38.962 Verification LBA range: start 0x0 length 0x2000 00:20:38.962 nvme0n1 : 1.01 5428.31 21.20 0.00 0.00 23423.18 5398.92 23468.13 00:20:38.962 [2024-12-10T04:45:26.858Z] =================================================================================================================== 00:20:38.962 [2024-12-10T04:45:26.858Z] Total : 5428.31 21.20 0.00 0.00 23423.18 5398.92 23468.13 00:20:38.962 { 00:20:38.962 "results": [ 00:20:38.962 { 00:20:38.962 "job": "nvme0n1", 00:20:38.962 "core_mask": "0x2", 00:20:38.962 "workload": "verify", 00:20:38.962 "status": "finished", 00:20:38.962 "verify_range": { 00:20:38.962 "start": 0, 00:20:38.962 "length": 8192 00:20:38.962 }, 00:20:38.962 "queue_depth": 128, 00:20:38.962 "io_size": 4096, 00:20:38.962 "runtime": 1.012839, 00:20:38.962 "iops": 5428.305979528829, 00:20:38.962 "mibps": 21.20432023253449, 00:20:38.962 "io_failed": 0, 00:20:38.962 "io_timeout": 0, 00:20:38.962 "avg_latency_us": 23423.1770430806, 00:20:38.962 "min_latency_us": 5398.918095238095, 00:20:38.962 "max_latency_us": 23468.129523809523 00:20:38.962 } 00:20:38.962 ], 00:20:38.962 "core_count": 1 00:20:38.962 } 00:20:38.962 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:38.962 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.962 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.220 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.220 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:39.220 "subsystems": [ 00:20:39.220 { 00:20:39.220 "subsystem": 
"keyring", 00:20:39.220 "config": [ 00:20:39.220 { 00:20:39.220 "method": "keyring_file_add_key", 00:20:39.220 "params": { 00:20:39.220 "name": "key0", 00:20:39.220 "path": "/tmp/tmp.nl7cTjfzQg" 00:20:39.220 } 00:20:39.220 } 00:20:39.220 ] 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "subsystem": "iobuf", 00:20:39.220 "config": [ 00:20:39.220 { 00:20:39.220 "method": "iobuf_set_options", 00:20:39.220 "params": { 00:20:39.220 "small_pool_count": 8192, 00:20:39.220 "large_pool_count": 1024, 00:20:39.220 "small_bufsize": 8192, 00:20:39.220 "large_bufsize": 135168, 00:20:39.220 "enable_numa": false 00:20:39.220 } 00:20:39.220 } 00:20:39.220 ] 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "subsystem": "sock", 00:20:39.220 "config": [ 00:20:39.220 { 00:20:39.220 "method": "sock_set_default_impl", 00:20:39.220 "params": { 00:20:39.220 "impl_name": "posix" 00:20:39.220 } 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "method": "sock_impl_set_options", 00:20:39.220 "params": { 00:20:39.220 "impl_name": "ssl", 00:20:39.220 "recv_buf_size": 4096, 00:20:39.220 "send_buf_size": 4096, 00:20:39.220 "enable_recv_pipe": true, 00:20:39.220 "enable_quickack": false, 00:20:39.220 "enable_placement_id": 0, 00:20:39.220 "enable_zerocopy_send_server": true, 00:20:39.220 "enable_zerocopy_send_client": false, 00:20:39.220 "zerocopy_threshold": 0, 00:20:39.220 "tls_version": 0, 00:20:39.220 "enable_ktls": false 00:20:39.220 } 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "method": "sock_impl_set_options", 00:20:39.220 "params": { 00:20:39.220 "impl_name": "posix", 00:20:39.220 "recv_buf_size": 2097152, 00:20:39.220 "send_buf_size": 2097152, 00:20:39.220 "enable_recv_pipe": true, 00:20:39.220 "enable_quickack": false, 00:20:39.220 "enable_placement_id": 0, 00:20:39.220 "enable_zerocopy_send_server": true, 00:20:39.220 "enable_zerocopy_send_client": false, 00:20:39.220 "zerocopy_threshold": 0, 00:20:39.220 "tls_version": 0, 00:20:39.220 "enable_ktls": false 00:20:39.220 } 00:20:39.220 } 00:20:39.220 
] 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "subsystem": "vmd", 00:20:39.220 "config": [] 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "subsystem": "accel", 00:20:39.220 "config": [ 00:20:39.220 { 00:20:39.220 "method": "accel_set_options", 00:20:39.220 "params": { 00:20:39.220 "small_cache_size": 128, 00:20:39.220 "large_cache_size": 16, 00:20:39.220 "task_count": 2048, 00:20:39.220 "sequence_count": 2048, 00:20:39.220 "buf_count": 2048 00:20:39.220 } 00:20:39.220 } 00:20:39.220 ] 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "subsystem": "bdev", 00:20:39.220 "config": [ 00:20:39.220 { 00:20:39.220 "method": "bdev_set_options", 00:20:39.220 "params": { 00:20:39.220 "bdev_io_pool_size": 65535, 00:20:39.220 "bdev_io_cache_size": 256, 00:20:39.220 "bdev_auto_examine": true, 00:20:39.220 "iobuf_small_cache_size": 128, 00:20:39.220 "iobuf_large_cache_size": 16 00:20:39.220 } 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "method": "bdev_raid_set_options", 00:20:39.220 "params": { 00:20:39.220 "process_window_size_kb": 1024, 00:20:39.220 "process_max_bandwidth_mb_sec": 0 00:20:39.220 } 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "method": "bdev_iscsi_set_options", 00:20:39.220 "params": { 00:20:39.220 "timeout_sec": 30 00:20:39.220 } 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "method": "bdev_nvme_set_options", 00:20:39.220 "params": { 00:20:39.220 "action_on_timeout": "none", 00:20:39.220 "timeout_us": 0, 00:20:39.220 "timeout_admin_us": 0, 00:20:39.220 "keep_alive_timeout_ms": 10000, 00:20:39.220 "arbitration_burst": 0, 00:20:39.220 "low_priority_weight": 0, 00:20:39.220 "medium_priority_weight": 0, 00:20:39.220 "high_priority_weight": 0, 00:20:39.220 "nvme_adminq_poll_period_us": 10000, 00:20:39.220 "nvme_ioq_poll_period_us": 0, 00:20:39.220 "io_queue_requests": 0, 00:20:39.220 "delay_cmd_submit": true, 00:20:39.220 "transport_retry_count": 4, 00:20:39.220 "bdev_retry_count": 3, 00:20:39.220 "transport_ack_timeout": 0, 00:20:39.220 "ctrlr_loss_timeout_sec": 0, 
00:20:39.220 "reconnect_delay_sec": 0, 00:20:39.220 "fast_io_fail_timeout_sec": 0, 00:20:39.220 "disable_auto_failback": false, 00:20:39.220 "generate_uuids": false, 00:20:39.220 "transport_tos": 0, 00:20:39.220 "nvme_error_stat": false, 00:20:39.220 "rdma_srq_size": 0, 00:20:39.220 "io_path_stat": false, 00:20:39.220 "allow_accel_sequence": false, 00:20:39.220 "rdma_max_cq_size": 0, 00:20:39.220 "rdma_cm_event_timeout_ms": 0, 00:20:39.220 "dhchap_digests": [ 00:20:39.220 "sha256", 00:20:39.220 "sha384", 00:20:39.220 "sha512" 00:20:39.220 ], 00:20:39.220 "dhchap_dhgroups": [ 00:20:39.220 "null", 00:20:39.220 "ffdhe2048", 00:20:39.220 "ffdhe3072", 00:20:39.220 "ffdhe4096", 00:20:39.220 "ffdhe6144", 00:20:39.220 "ffdhe8192" 00:20:39.220 ] 00:20:39.220 } 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "method": "bdev_nvme_set_hotplug", 00:20:39.220 "params": { 00:20:39.220 "period_us": 100000, 00:20:39.220 "enable": false 00:20:39.220 } 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "method": "bdev_malloc_create", 00:20:39.220 "params": { 00:20:39.220 "name": "malloc0", 00:20:39.220 "num_blocks": 8192, 00:20:39.220 "block_size": 4096, 00:20:39.220 "physical_block_size": 4096, 00:20:39.220 "uuid": "3ed2fe77-2214-41ec-b4fc-1f7616c203ea", 00:20:39.220 "optimal_io_boundary": 0, 00:20:39.220 "md_size": 0, 00:20:39.220 "dif_type": 0, 00:20:39.220 "dif_is_head_of_md": false, 00:20:39.220 "dif_pi_format": 0 00:20:39.220 } 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "method": "bdev_wait_for_examine" 00:20:39.220 } 00:20:39.220 ] 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "subsystem": "nbd", 00:20:39.220 "config": [] 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "subsystem": "scheduler", 00:20:39.220 "config": [ 00:20:39.220 { 00:20:39.220 "method": "framework_set_scheduler", 00:20:39.220 "params": { 00:20:39.220 "name": "static" 00:20:39.220 } 00:20:39.220 } 00:20:39.220 ] 00:20:39.220 }, 00:20:39.220 { 00:20:39.220 "subsystem": "nvmf", 00:20:39.220 "config": [ 00:20:39.221 { 
00:20:39.221 "method": "nvmf_set_config", 00:20:39.221 "params": { 00:20:39.221 "discovery_filter": "match_any", 00:20:39.221 "admin_cmd_passthru": { 00:20:39.221 "identify_ctrlr": false 00:20:39.221 }, 00:20:39.221 "dhchap_digests": [ 00:20:39.221 "sha256", 00:20:39.221 "sha384", 00:20:39.221 "sha512" 00:20:39.221 ], 00:20:39.221 "dhchap_dhgroups": [ 00:20:39.221 "null", 00:20:39.221 "ffdhe2048", 00:20:39.221 "ffdhe3072", 00:20:39.221 "ffdhe4096", 00:20:39.221 "ffdhe6144", 00:20:39.221 "ffdhe8192" 00:20:39.221 ] 00:20:39.221 } 00:20:39.221 }, 00:20:39.221 { 00:20:39.221 "method": "nvmf_set_max_subsystems", 00:20:39.221 "params": { 00:20:39.221 "max_subsystems": 1024 00:20:39.221 } 00:20:39.221 }, 00:20:39.221 { 00:20:39.221 "method": "nvmf_set_crdt", 00:20:39.221 "params": { 00:20:39.221 "crdt1": 0, 00:20:39.221 "crdt2": 0, 00:20:39.221 "crdt3": 0 00:20:39.221 } 00:20:39.221 }, 00:20:39.221 { 00:20:39.221 "method": "nvmf_create_transport", 00:20:39.221 "params": { 00:20:39.221 "trtype": "TCP", 00:20:39.221 "max_queue_depth": 128, 00:20:39.221 "max_io_qpairs_per_ctrlr": 127, 00:20:39.221 "in_capsule_data_size": 4096, 00:20:39.221 "max_io_size": 131072, 00:20:39.221 "io_unit_size": 131072, 00:20:39.221 "max_aq_depth": 128, 00:20:39.221 "num_shared_buffers": 511, 00:20:39.221 "buf_cache_size": 4294967295, 00:20:39.221 "dif_insert_or_strip": false, 00:20:39.221 "zcopy": false, 00:20:39.221 "c2h_success": false, 00:20:39.221 "sock_priority": 0, 00:20:39.221 "abort_timeout_sec": 1, 00:20:39.221 "ack_timeout": 0, 00:20:39.221 "data_wr_pool_size": 0 00:20:39.221 } 00:20:39.221 }, 00:20:39.221 { 00:20:39.221 "method": "nvmf_create_subsystem", 00:20:39.221 "params": { 00:20:39.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.221 "allow_any_host": false, 00:20:39.221 "serial_number": "00000000000000000000", 00:20:39.221 "model_number": "SPDK bdev Controller", 00:20:39.221 "max_namespaces": 32, 00:20:39.221 "min_cntlid": 1, 00:20:39.221 "max_cntlid": 65519, 00:20:39.221 
"ana_reporting": false 00:20:39.221 } 00:20:39.221 }, 00:20:39.221 { 00:20:39.221 "method": "nvmf_subsystem_add_host", 00:20:39.221 "params": { 00:20:39.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.221 "host": "nqn.2016-06.io.spdk:host1", 00:20:39.221 "psk": "key0" 00:20:39.221 } 00:20:39.221 }, 00:20:39.221 { 00:20:39.221 "method": "nvmf_subsystem_add_ns", 00:20:39.221 "params": { 00:20:39.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.221 "namespace": { 00:20:39.221 "nsid": 1, 00:20:39.221 "bdev_name": "malloc0", 00:20:39.221 "nguid": "3ED2FE77221441ECB4FC1F7616C203EA", 00:20:39.221 "uuid": "3ed2fe77-2214-41ec-b4fc-1f7616c203ea", 00:20:39.221 "no_auto_visible": false 00:20:39.221 } 00:20:39.221 } 00:20:39.221 }, 00:20:39.221 { 00:20:39.221 "method": "nvmf_subsystem_add_listener", 00:20:39.221 "params": { 00:20:39.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.221 "listen_address": { 00:20:39.221 "trtype": "TCP", 00:20:39.221 "adrfam": "IPv4", 00:20:39.221 "traddr": "10.0.0.2", 00:20:39.221 "trsvcid": "4420" 00:20:39.221 }, 00:20:39.221 "secure_channel": false, 00:20:39.221 "sock_impl": "ssl" 00:20:39.221 } 00:20:39.221 } 00:20:39.221 ] 00:20:39.221 } 00:20:39.221 ] 00:20:39.221 }' 00:20:39.221 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:39.479 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:39.479 "subsystems": [ 00:20:39.479 { 00:20:39.479 "subsystem": "keyring", 00:20:39.479 "config": [ 00:20:39.479 { 00:20:39.479 "method": "keyring_file_add_key", 00:20:39.479 "params": { 00:20:39.479 "name": "key0", 00:20:39.479 "path": "/tmp/tmp.nl7cTjfzQg" 00:20:39.479 } 00:20:39.479 } 00:20:39.479 ] 00:20:39.479 }, 00:20:39.479 { 00:20:39.479 "subsystem": "iobuf", 00:20:39.479 "config": [ 00:20:39.479 { 00:20:39.479 "method": "iobuf_set_options", 00:20:39.479 "params": { 00:20:39.479 
"small_pool_count": 8192, 00:20:39.479 "large_pool_count": 1024, 00:20:39.479 "small_bufsize": 8192, 00:20:39.479 "large_bufsize": 135168, 00:20:39.479 "enable_numa": false 00:20:39.479 } 00:20:39.479 } 00:20:39.479 ] 00:20:39.479 }, 00:20:39.479 { 00:20:39.479 "subsystem": "sock", 00:20:39.479 "config": [ 00:20:39.479 { 00:20:39.479 "method": "sock_set_default_impl", 00:20:39.479 "params": { 00:20:39.479 "impl_name": "posix" 00:20:39.479 } 00:20:39.479 }, 00:20:39.479 { 00:20:39.479 "method": "sock_impl_set_options", 00:20:39.479 "params": { 00:20:39.479 "impl_name": "ssl", 00:20:39.479 "recv_buf_size": 4096, 00:20:39.479 "send_buf_size": 4096, 00:20:39.479 "enable_recv_pipe": true, 00:20:39.479 "enable_quickack": false, 00:20:39.479 "enable_placement_id": 0, 00:20:39.479 "enable_zerocopy_send_server": true, 00:20:39.479 "enable_zerocopy_send_client": false, 00:20:39.479 "zerocopy_threshold": 0, 00:20:39.479 "tls_version": 0, 00:20:39.479 "enable_ktls": false 00:20:39.479 } 00:20:39.479 }, 00:20:39.479 { 00:20:39.479 "method": "sock_impl_set_options", 00:20:39.479 "params": { 00:20:39.480 "impl_name": "posix", 00:20:39.480 "recv_buf_size": 2097152, 00:20:39.480 "send_buf_size": 2097152, 00:20:39.480 "enable_recv_pipe": true, 00:20:39.480 "enable_quickack": false, 00:20:39.480 "enable_placement_id": 0, 00:20:39.480 "enable_zerocopy_send_server": true, 00:20:39.480 "enable_zerocopy_send_client": false, 00:20:39.480 "zerocopy_threshold": 0, 00:20:39.480 "tls_version": 0, 00:20:39.480 "enable_ktls": false 00:20:39.480 } 00:20:39.480 } 00:20:39.480 ] 00:20:39.480 }, 00:20:39.480 { 00:20:39.480 "subsystem": "vmd", 00:20:39.480 "config": [] 00:20:39.480 }, 00:20:39.480 { 00:20:39.480 "subsystem": "accel", 00:20:39.480 "config": [ 00:20:39.480 { 00:20:39.480 "method": "accel_set_options", 00:20:39.480 "params": { 00:20:39.480 "small_cache_size": 128, 00:20:39.480 "large_cache_size": 16, 00:20:39.480 "task_count": 2048, 00:20:39.480 "sequence_count": 2048, 00:20:39.480 
"buf_count": 2048 00:20:39.480 } 00:20:39.480 } 00:20:39.480 ] 00:20:39.480 }, 00:20:39.480 { 00:20:39.480 "subsystem": "bdev", 00:20:39.480 "config": [ 00:20:39.480 { 00:20:39.480 "method": "bdev_set_options", 00:20:39.480 "params": { 00:20:39.480 "bdev_io_pool_size": 65535, 00:20:39.480 "bdev_io_cache_size": 256, 00:20:39.480 "bdev_auto_examine": true, 00:20:39.480 "iobuf_small_cache_size": 128, 00:20:39.480 "iobuf_large_cache_size": 16 00:20:39.480 } 00:20:39.480 }, 00:20:39.480 { 00:20:39.480 "method": "bdev_raid_set_options", 00:20:39.480 "params": { 00:20:39.480 "process_window_size_kb": 1024, 00:20:39.480 "process_max_bandwidth_mb_sec": 0 00:20:39.480 } 00:20:39.480 }, 00:20:39.480 { 00:20:39.480 "method": "bdev_iscsi_set_options", 00:20:39.480 "params": { 00:20:39.480 "timeout_sec": 30 00:20:39.480 } 00:20:39.480 }, 00:20:39.480 { 00:20:39.480 "method": "bdev_nvme_set_options", 00:20:39.480 "params": { 00:20:39.480 "action_on_timeout": "none", 00:20:39.480 "timeout_us": 0, 00:20:39.480 "timeout_admin_us": 0, 00:20:39.480 "keep_alive_timeout_ms": 10000, 00:20:39.480 "arbitration_burst": 0, 00:20:39.480 "low_priority_weight": 0, 00:20:39.480 "medium_priority_weight": 0, 00:20:39.480 "high_priority_weight": 0, 00:20:39.480 "nvme_adminq_poll_period_us": 10000, 00:20:39.480 "nvme_ioq_poll_period_us": 0, 00:20:39.480 "io_queue_requests": 512, 00:20:39.480 "delay_cmd_submit": true, 00:20:39.480 "transport_retry_count": 4, 00:20:39.480 "bdev_retry_count": 3, 00:20:39.480 "transport_ack_timeout": 0, 00:20:39.480 "ctrlr_loss_timeout_sec": 0, 00:20:39.480 "reconnect_delay_sec": 0, 00:20:39.480 "fast_io_fail_timeout_sec": 0, 00:20:39.480 "disable_auto_failback": false, 00:20:39.480 "generate_uuids": false, 00:20:39.480 "transport_tos": 0, 00:20:39.480 "nvme_error_stat": false, 00:20:39.480 "rdma_srq_size": 0, 00:20:39.480 "io_path_stat": false, 00:20:39.480 "allow_accel_sequence": false, 00:20:39.480 "rdma_max_cq_size": 0, 00:20:39.480 "rdma_cm_event_timeout_ms": 0, 
00:20:39.480 "dhchap_digests": [ 00:20:39.480 "sha256", 00:20:39.480 "sha384", 00:20:39.480 "sha512" 00:20:39.480 ], 00:20:39.480 "dhchap_dhgroups": [ 00:20:39.480 "null", 00:20:39.480 "ffdhe2048", 00:20:39.480 "ffdhe3072", 00:20:39.480 "ffdhe4096", 00:20:39.480 "ffdhe6144", 00:20:39.480 "ffdhe8192" 00:20:39.480 ] 00:20:39.480 } 00:20:39.480 }, 00:20:39.480 { 00:20:39.480 "method": "bdev_nvme_attach_controller", 00:20:39.480 "params": { 00:20:39.480 "name": "nvme0", 00:20:39.480 "trtype": "TCP", 00:20:39.480 "adrfam": "IPv4", 00:20:39.480 "traddr": "10.0.0.2", 00:20:39.480 "trsvcid": "4420", 00:20:39.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.480 "prchk_reftag": false, 00:20:39.480 "prchk_guard": false, 00:20:39.480 "ctrlr_loss_timeout_sec": 0, 00:20:39.480 "reconnect_delay_sec": 0, 00:20:39.480 "fast_io_fail_timeout_sec": 0, 00:20:39.480 "psk": "key0", 00:20:39.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.480 "hdgst": false, 00:20:39.480 "ddgst": false, 00:20:39.480 "multipath": "multipath" 00:20:39.480 } 00:20:39.480 }, 00:20:39.480 { 00:20:39.480 "method": "bdev_nvme_set_hotplug", 00:20:39.480 "params": { 00:20:39.480 "period_us": 100000, 00:20:39.480 "enable": false 00:20:39.480 } 00:20:39.480 }, 00:20:39.480 { 00:20:39.480 "method": "bdev_enable_histogram", 00:20:39.480 "params": { 00:20:39.480 "name": "nvme0n1", 00:20:39.480 "enable": true 00:20:39.480 } 00:20:39.480 }, 00:20:39.480 { 00:20:39.480 "method": "bdev_wait_for_examine" 00:20:39.480 } 00:20:39.480 ] 00:20:39.480 }, 00:20:39.480 { 00:20:39.480 "subsystem": "nbd", 00:20:39.480 "config": [] 00:20:39.480 } 00:20:39.480 ] 00:20:39.480 }' 00:20:39.480 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1216438 00:20:39.480 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1216438 ']' 00:20:39.480 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1216438 00:20:39.480 05:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:39.480 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.480 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1216438 00:20:39.480 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:39.480 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:39.480 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1216438' 00:20:39.480 killing process with pid 1216438 00:20:39.480 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1216438 00:20:39.480 Received shutdown signal, test time was about 1.000000 seconds 00:20:39.480 00:20:39.480 Latency(us) 00:20:39.480 [2024-12-10T04:45:27.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.480 [2024-12-10T04:45:27.376Z] =================================================================================================================== 00:20:39.480 [2024-12-10T04:45:27.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.480 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1216438 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1216282 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1216282 ']' 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1216282 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.739 
05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1216282 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1216282' 00:20:39.739 killing process with pid 1216282 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1216282 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1216282 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.739 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:39.739 "subsystems": [ 00:20:39.739 { 00:20:39.739 "subsystem": "keyring", 00:20:39.739 "config": [ 00:20:39.739 { 00:20:39.739 "method": "keyring_file_add_key", 00:20:39.739 "params": { 00:20:39.739 "name": "key0", 00:20:39.739 "path": "/tmp/tmp.nl7cTjfzQg" 00:20:39.739 } 00:20:39.739 } 00:20:39.739 ] 00:20:39.739 }, 00:20:39.739 { 00:20:39.739 "subsystem": "iobuf", 00:20:39.739 "config": [ 00:20:39.739 { 00:20:39.739 "method": "iobuf_set_options", 00:20:39.739 "params": { 00:20:39.739 "small_pool_count": 8192, 00:20:39.739 "large_pool_count": 1024, 00:20:39.739 "small_bufsize": 8192, 00:20:39.739 "large_bufsize": 135168, 00:20:39.739 "enable_numa": false 00:20:39.739 } 00:20:39.739 } 00:20:39.739 ] 00:20:39.739 }, 00:20:39.739 { 00:20:39.739 "subsystem": "sock", 00:20:39.739 "config": [ 
00:20:39.739 { 00:20:39.739 "method": "sock_set_default_impl", 00:20:39.739 "params": { 00:20:39.739 "impl_name": "posix" 00:20:39.739 } 00:20:39.739 }, 00:20:39.739 { 00:20:39.739 "method": "sock_impl_set_options", 00:20:39.739 "params": { 00:20:39.739 "impl_name": "ssl", 00:20:39.739 "recv_buf_size": 4096, 00:20:39.739 "send_buf_size": 4096, 00:20:39.739 "enable_recv_pipe": true, 00:20:39.739 "enable_quickack": false, 00:20:39.739 "enable_placement_id": 0, 00:20:39.739 "enable_zerocopy_send_server": true, 00:20:39.739 "enable_zerocopy_send_client": false, 00:20:39.739 "zerocopy_threshold": 0, 00:20:39.739 "tls_version": 0, 00:20:39.739 "enable_ktls": false 00:20:39.739 } 00:20:39.739 }, 00:20:39.739 { 00:20:39.739 "method": "sock_impl_set_options", 00:20:39.739 "params": { 00:20:39.739 "impl_name": "posix", 00:20:39.739 "recv_buf_size": 2097152, 00:20:39.739 "send_buf_size": 2097152, 00:20:39.740 "enable_recv_pipe": true, 00:20:39.740 "enable_quickack": false, 00:20:39.740 "enable_placement_id": 0, 00:20:39.740 "enable_zerocopy_send_server": true, 00:20:39.740 "enable_zerocopy_send_client": false, 00:20:39.740 "zerocopy_threshold": 0, 00:20:39.740 "tls_version": 0, 00:20:39.740 "enable_ktls": false 00:20:39.740 } 00:20:39.740 } 00:20:39.740 ] 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "subsystem": "vmd", 00:20:39.740 "config": [] 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "subsystem": "accel", 00:20:39.740 "config": [ 00:20:39.740 { 00:20:39.740 "method": "accel_set_options", 00:20:39.740 "params": { 00:20:39.740 "small_cache_size": 128, 00:20:39.740 "large_cache_size": 16, 00:20:39.740 "task_count": 2048, 00:20:39.740 "sequence_count": 2048, 00:20:39.740 "buf_count": 2048 00:20:39.740 } 00:20:39.740 } 00:20:39.740 ] 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "subsystem": "bdev", 00:20:39.740 "config": [ 00:20:39.740 { 00:20:39.740 "method": "bdev_set_options", 00:20:39.740 "params": { 00:20:39.740 "bdev_io_pool_size": 65535, 00:20:39.740 "bdev_io_cache_size": 
256, 00:20:39.740 "bdev_auto_examine": true, 00:20:39.740 "iobuf_small_cache_size": 128, 00:20:39.740 "iobuf_large_cache_size": 16 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": "bdev_raid_set_options", 00:20:39.740 "params": { 00:20:39.740 "process_window_size_kb": 1024, 00:20:39.740 "process_max_bandwidth_mb_sec": 0 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": "bdev_iscsi_set_options", 00:20:39.740 "params": { 00:20:39.740 "timeout_sec": 30 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": "bdev_nvme_set_options", 00:20:39.740 "params": { 00:20:39.740 "action_on_timeout": "none", 00:20:39.740 "timeout_us": 0, 00:20:39.740 "timeout_admin_us": 0, 00:20:39.740 "keep_alive_timeout_ms": 10000, 00:20:39.740 "arbitration_burst": 0, 00:20:39.740 "low_priority_weight": 0, 00:20:39.740 "medium_priority_weight": 0, 00:20:39.740 "high_priority_weight": 0, 00:20:39.740 "nvme_adminq_poll_period_us": 10000, 00:20:39.740 "nvme_ioq_poll_period_us": 0, 00:20:39.740 "io_queue_requests": 0, 00:20:39.740 "delay_cmd_submit": true, 00:20:39.740 "transport_retry_count": 4, 00:20:39.740 "bdev_retry_count": 3, 00:20:39.740 "transport_ack_timeout": 0, 00:20:39.740 "ctrlr_loss_timeout_sec": 0, 00:20:39.740 "reconnect_delay_sec": 0, 00:20:39.740 "fast_io_fail_timeout_sec": 0, 00:20:39.740 "disable_auto_failback": false, 00:20:39.740 "generate_uuids": false, 00:20:39.740 "transport_tos": 0, 00:20:39.740 "nvme_error_stat": false, 00:20:39.740 "rdma_srq_size": 0, 00:20:39.740 "io_path_stat": false, 00:20:39.740 "allow_accel_sequence": false, 00:20:39.740 "rdma_max_cq_size": 0, 00:20:39.740 "rdma_cm_event_timeout_ms": 0, 00:20:39.740 "dhchap_digests": [ 00:20:39.740 "sha256", 00:20:39.740 "sha384", 00:20:39.740 "sha512" 00:20:39.740 ], 00:20:39.740 "dhchap_dhgroups": [ 00:20:39.740 "null", 00:20:39.740 "ffdhe2048", 00:20:39.740 "ffdhe3072", 00:20:39.740 "ffdhe4096", 00:20:39.740 "ffdhe6144", 00:20:39.740 "ffdhe8192" 00:20:39.740 ] 
00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": "bdev_nvme_set_hotplug", 00:20:39.740 "params": { 00:20:39.740 "period_us": 100000, 00:20:39.740 "enable": false 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": "bdev_malloc_create", 00:20:39.740 "params": { 00:20:39.740 "name": "malloc0", 00:20:39.740 "num_blocks": 8192, 00:20:39.740 "block_size": 4096, 00:20:39.740 "physical_block_size": 4096, 00:20:39.740 "uuid": "3ed2fe77-2214-41ec-b4fc-1f7616c203ea", 00:20:39.740 "optimal_io_boundary": 0, 00:20:39.740 "md_size": 0, 00:20:39.740 "dif_type": 0, 00:20:39.740 "dif_is_head_of_md": false, 00:20:39.740 "dif_pi_format": 0 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": "bdev_wait_for_examine" 00:20:39.740 } 00:20:39.740 ] 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "subsystem": "nbd", 00:20:39.740 "config": [] 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "subsystem": "scheduler", 00:20:39.740 "config": [ 00:20:39.740 { 00:20:39.740 "method": "framework_set_scheduler", 00:20:39.740 "params": { 00:20:39.740 "name": "static" 00:20:39.740 } 00:20:39.740 } 00:20:39.740 ] 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "subsystem": "nvmf", 00:20:39.740 "config": [ 00:20:39.740 { 00:20:39.740 "method": "nvmf_set_config", 00:20:39.740 "params": { 00:20:39.740 "discovery_filter": "match_any", 00:20:39.740 "admin_cmd_passthru": { 00:20:39.740 "identify_ctrlr": false 00:20:39.740 }, 00:20:39.740 "dhchap_digests": [ 00:20:39.740 "sha256", 00:20:39.740 "sha384", 00:20:39.740 "sha512" 00:20:39.740 ], 00:20:39.740 "dhchap_dhgroups": [ 00:20:39.740 "null", 00:20:39.740 "ffdhe2048", 00:20:39.740 "ffdhe3072", 00:20:39.740 "ffdhe4096", 00:20:39.740 "ffdhe6144", 00:20:39.740 "ffdhe8192" 00:20:39.740 ] 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": "nvmf_set_max_subsystems", 00:20:39.740 "params": { 00:20:39.740 "max_subsystems": 1024 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": 
"nvmf_set_crdt", 00:20:39.740 "params": { 00:20:39.740 "crdt1": 0, 00:20:39.740 "crdt2": 0, 00:20:39.740 "crdt3": 0 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": "nvmf_create_transport", 00:20:39.740 "params": { 00:20:39.740 "trtype": "TCP", 00:20:39.740 "max_queue_depth": 128, 00:20:39.740 "max_io_qpairs_per_ctrlr": 127, 00:20:39.740 "in_capsule_data_size": 4096, 00:20:39.740 "max_io_size": 131072, 00:20:39.740 "io_unit_size": 131072, 00:20:39.740 "max_aq_depth": 128, 00:20:39.740 "num_shared_buffers": 511, 00:20:39.740 "buf_cache_size": 4294967295, 00:20:39.740 "dif_insert_or_strip": false, 00:20:39.740 "zcopy": false, 00:20:39.740 "c2h_success": false, 00:20:39.740 "sock_priority": 0, 00:20:39.740 "abort_timeout_sec": 1, 00:20:39.740 "ack_timeout": 0, 00:20:39.740 "data_wr_pool_size": 0 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": "nvmf_create_subsystem", 00:20:39.740 "params": { 00:20:39.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.740 "allow_any_host": false, 00:20:39.740 "serial_number": "00000000000000000000", 00:20:39.740 "model_number": "SPDK bdev Controller", 00:20:39.740 "max_namespaces": 32, 00:20:39.740 "min_cntlid": 1, 00:20:39.740 "max_cntlid": 65519, 00:20:39.740 "ana_reporting": false 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": "nvmf_subsystem_add_host", 00:20:39.740 "params": { 00:20:39.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.740 "host": "nqn.2016-06.io.spdk:host1", 00:20:39.740 "psk": "key0" 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 00:20:39.740 "method": "nvmf_subsystem_add_ns", 00:20:39.740 "params": { 00:20:39.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.740 "namespace": { 00:20:39.740 "nsid": 1, 00:20:39.740 "bdev_name": "malloc0", 00:20:39.740 "nguid": "3ED2FE77221441ECB4FC1F7616C203EA", 00:20:39.740 "uuid": "3ed2fe77-2214-41ec-b4fc-1f7616c203ea", 00:20:39.740 "no_auto_visible": false 00:20:39.740 } 00:20:39.740 } 00:20:39.740 }, 00:20:39.740 { 
00:20:39.740 "method": "nvmf_subsystem_add_listener", 00:20:39.740 "params": { 00:20:39.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.740 "listen_address": { 00:20:39.740 "trtype": "TCP", 00:20:39.740 "adrfam": "IPv4", 00:20:39.740 "traddr": "10.0.0.2", 00:20:39.740 "trsvcid": "4420" 00:20:39.740 }, 00:20:39.740 "secure_channel": false, 00:20:39.740 "sock_impl": "ssl" 00:20:39.740 } 00:20:39.740 } 00:20:39.740 ] 00:20:39.740 } 00:20:39.740 ] 00:20:39.740 }' 00:20:39.740 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.740 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1216907 00:20:39.740 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:39.740 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1216907 00:20:39.740 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1216907 ']' 00:20:39.740 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.740 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.740 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.740 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.740 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.002 [2024-12-10 05:45:27.649130] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
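Editor's note: in the target config echoed above, `nvmf_subsystem_add_ns` carries both `"uuid": "3ed2fe77-2214-41ec-b4fc-1f7616c203ea"` and `"nguid": "3ED2FE77221441ECB4FC1F7616C203EA"`; the NGUID here is simply the UUID's 16 bytes rendered as 32 uppercase hex digits with the dashes dropped. A minimal sketch of that relationship (the helper name is ours, not an SPDK API):

```python
import uuid

def nguid_from_uuid(u: str) -> str:
    """Render a UUID's 16 bytes as the 32-char uppercase hex NGUID form."""
    return uuid.UUID(u).hex.upper()

# UUID taken from the bdev_malloc_create / nvmf_subsystem_add_ns params above.
print(nguid_from_uuid("3ed2fe77-2214-41ec-b4fc-1f7616c203ea"))
# → 3ED2FE77221441ECB4FC1F7616C203EA
```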
00:20:40.002 [2024-12-10 05:45:27.649185] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.002 [2024-12-10 05:45:27.723847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.002 [2024-12-10 05:45:27.762833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.002 [2024-12-10 05:45:27.762868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.002 [2024-12-10 05:45:27.762875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.002 [2024-12-10 05:45:27.762881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.002 [2024-12-10 05:45:27.762886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
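Editor's note: the target config echoed above wires up TLS in two places: `nvmf_subsystem_add_host` grants `host1` access with `"psk": "key0"`, and the listener sets `"secure_channel": false` with `"sock_impl": "ssl"`. A minimal sketch of that config shape (the key path and helper function are illustrative; the keyring registration of `key0` is assumed to follow the `keyring_file_add_key` pattern visible elsewhere in this log):

```python
import json

def tls_target_config(key_path: str, subnqn: str, hostnqn: str) -> dict:
    """Sketch of the config pieces that pair a PSK with a TLS-capable listener."""
    return {"subsystems": [
        {"subsystem": "keyring", "config": [
            {"method": "keyring_file_add_key",
             "params": {"name": "key0", "path": key_path}}]},
        {"subsystem": "nvmf", "config": [
            {"method": "nvmf_subsystem_add_host",
             "params": {"nqn": subnqn, "host": hostnqn, "psk": "key0"}},
            {"method": "nvmf_subsystem_add_listener",
             "params": {"nqn": subnqn,
                        "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                                           "traddr": "10.0.0.2", "trsvcid": "4420"},
                        "secure_channel": False, "sock_impl": "ssl"}}]}]}

cfg = tls_target_config("/tmp/tmp.nl7cTjfzQg",
                        "nqn.2016-06.io.spdk:cnode1",
                        "nqn.2016-06.io.spdk:host1")
print(json.dumps(cfg, indent=2)[:80])
```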
00:20:40.002 [2024-12-10 05:45:27.763403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.260 [2024-12-10 05:45:27.976772] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.260 [2024-12-10 05:45:28.008808] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.260 [2024-12-10 05:45:28.009004] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1217142 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1217142 /var/tmp/bdevperf.sock 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1217142 ']' 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.826 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:40.826 "subsystems": [ 00:20:40.826 { 00:20:40.826 "subsystem": "keyring", 00:20:40.826 "config": [ 00:20:40.826 { 00:20:40.826 "method": "keyring_file_add_key", 00:20:40.826 "params": { 00:20:40.826 "name": "key0", 00:20:40.826 "path": "/tmp/tmp.nl7cTjfzQg" 00:20:40.826 } 00:20:40.826 } 00:20:40.826 ] 00:20:40.826 }, 00:20:40.826 { 00:20:40.826 "subsystem": "iobuf", 00:20:40.826 "config": [ 00:20:40.826 { 00:20:40.826 "method": "iobuf_set_options", 00:20:40.826 "params": { 00:20:40.826 "small_pool_count": 8192, 00:20:40.826 "large_pool_count": 1024, 00:20:40.826 "small_bufsize": 8192, 00:20:40.826 "large_bufsize": 135168, 00:20:40.826 "enable_numa": false 00:20:40.826 } 00:20:40.826 } 00:20:40.826 ] 00:20:40.826 }, 00:20:40.826 { 00:20:40.826 "subsystem": "sock", 00:20:40.826 "config": [ 00:20:40.826 { 00:20:40.826 "method": "sock_set_default_impl", 00:20:40.826 "params": { 00:20:40.826 "impl_name": "posix" 00:20:40.826 } 00:20:40.826 }, 00:20:40.826 { 00:20:40.826 "method": "sock_impl_set_options", 00:20:40.826 "params": { 00:20:40.826 "impl_name": "ssl", 00:20:40.826 "recv_buf_size": 4096, 00:20:40.826 "send_buf_size": 4096, 00:20:40.826 "enable_recv_pipe": true, 00:20:40.826 "enable_quickack": false, 00:20:40.826 "enable_placement_id": 0, 00:20:40.826 "enable_zerocopy_send_server": true, 00:20:40.826 "enable_zerocopy_send_client": false, 00:20:40.826 "zerocopy_threshold": 0, 00:20:40.826 "tls_version": 0, 00:20:40.826 "enable_ktls": false 00:20:40.826 } 00:20:40.826 }, 00:20:40.826 { 00:20:40.826 "method": "sock_impl_set_options", 00:20:40.826 "params": { 
00:20:40.826 "impl_name": "posix", 00:20:40.827 "recv_buf_size": 2097152, 00:20:40.827 "send_buf_size": 2097152, 00:20:40.827 "enable_recv_pipe": true, 00:20:40.827 "enable_quickack": false, 00:20:40.827 "enable_placement_id": 0, 00:20:40.827 "enable_zerocopy_send_server": true, 00:20:40.827 "enable_zerocopy_send_client": false, 00:20:40.827 "zerocopy_threshold": 0, 00:20:40.827 "tls_version": 0, 00:20:40.827 "enable_ktls": false 00:20:40.827 } 00:20:40.827 } 00:20:40.827 ] 00:20:40.827 }, 00:20:40.827 { 00:20:40.827 "subsystem": "vmd", 00:20:40.827 "config": [] 00:20:40.827 }, 00:20:40.827 { 00:20:40.827 "subsystem": "accel", 00:20:40.827 "config": [ 00:20:40.827 { 00:20:40.827 "method": "accel_set_options", 00:20:40.827 "params": { 00:20:40.827 "small_cache_size": 128, 00:20:40.827 "large_cache_size": 16, 00:20:40.827 "task_count": 2048, 00:20:40.827 "sequence_count": 2048, 00:20:40.827 "buf_count": 2048 00:20:40.827 } 00:20:40.827 } 00:20:40.827 ] 00:20:40.827 }, 00:20:40.827 { 00:20:40.827 "subsystem": "bdev", 00:20:40.827 "config": [ 00:20:40.827 { 00:20:40.827 "method": "bdev_set_options", 00:20:40.827 "params": { 00:20:40.827 "bdev_io_pool_size": 65535, 00:20:40.827 "bdev_io_cache_size": 256, 00:20:40.827 "bdev_auto_examine": true, 00:20:40.827 "iobuf_small_cache_size": 128, 00:20:40.827 "iobuf_large_cache_size": 16 00:20:40.827 } 00:20:40.827 }, 00:20:40.827 { 00:20:40.827 "method": "bdev_raid_set_options", 00:20:40.827 "params": { 00:20:40.827 "process_window_size_kb": 1024, 00:20:40.827 "process_max_bandwidth_mb_sec": 0 00:20:40.827 } 00:20:40.827 }, 00:20:40.827 { 00:20:40.827 "method": "bdev_iscsi_set_options", 00:20:40.827 "params": { 00:20:40.827 "timeout_sec": 30 00:20:40.827 } 00:20:40.827 }, 00:20:40.827 { 00:20:40.827 "method": "bdev_nvme_set_options", 00:20:40.827 "params": { 00:20:40.827 "action_on_timeout": "none", 00:20:40.827 "timeout_us": 0, 00:20:40.827 "timeout_admin_us": 0, 00:20:40.827 "keep_alive_timeout_ms": 10000, 00:20:40.827 
"arbitration_burst": 0, 00:20:40.827 "low_priority_weight": 0, 00:20:40.827 "medium_priority_weight": 0, 00:20:40.827 "high_priority_weight": 0, 00:20:40.827 "nvme_adminq_poll_period_us": 10000, 00:20:40.827 "nvme_ioq_poll_period_us": 0, 00:20:40.827 "io_queue_requests": 512, 00:20:40.827 "delay_cmd_submit": true, 00:20:40.827 "transport_retry_count": 4, 00:20:40.827 "bdev_retry_count": 3, 00:20:40.827 "transport_ack_timeout": 0, 00:20:40.827 "ctrlr_loss_timeout_sec": 0, 00:20:40.827 "reconnect_delay_sec": 0, 00:20:40.827 "fast_io_fail_timeout_sec": 0, 00:20:40.827 "disable_auto_failback": false, 00:20:40.827 "generate_uuids": false, 00:20:40.827 "transport_tos": 0, 00:20:40.827 "nvme_error_stat": false, 00:20:40.827 "rdma_srq_size": 0, 00:20:40.827 "io_path_stat": false, 00:20:40.827 "allow_accel_sequence": false, 00:20:40.827 "rdma_max_cq_size": 0, 00:20:40.827 "rdma_cm_event_timeout_ms": 0, 00:20:40.827 "dhchap_digests": [ 00:20:40.827 "sha256", 00:20:40.827 "sha384", 00:20:40.827 "sha512" 00:20:40.827 ], 00:20:40.827 "dhchap_dhgroups": [ 00:20:40.827 "null", 00:20:40.827 "ffdhe2048", 00:20:40.827 "ffdhe3072", 00:20:40.827 "ffdhe4096", 00:20:40.827 "ffdhe6144", 00:20:40.827 "ffdhe8192" 00:20:40.827 ] 00:20:40.827 } 00:20:40.827 }, 00:20:40.827 { 00:20:40.827 "method": "bdev_nvme_attach_controller", 00:20:40.827 "params": { 00:20:40.827 "name": "nvme0", 00:20:40.827 "trtype": "TCP", 00:20:40.827 "adrfam": "IPv4", 00:20:40.827 "traddr": "10.0.0.2", 00:20:40.827 "trsvcid": "4420", 00:20:40.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.827 "prchk_reftag": false, 00:20:40.827 "prchk_guard": false, 00:20:40.827 "ctrlr_loss_timeout_sec": 0, 00:20:40.827 "reconnect_delay_sec": 0, 00:20:40.827 "fast_io_fail_timeout_sec": 0, 00:20:40.827 "psk": "key0", 00:20:40.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.827 "hdgst": false, 00:20:40.827 "ddgst": false, 00:20:40.827 "multipath": "multipath" 00:20:40.827 } 00:20:40.827 }, 00:20:40.827 { 00:20:40.827 
"method": "bdev_nvme_set_hotplug", 00:20:40.827 "params": { 00:20:40.827 "period_us": 100000, 00:20:40.827 "enable": false 00:20:40.827 } 00:20:40.827 }, 00:20:40.827 { 00:20:40.827 "method": "bdev_enable_histogram", 00:20:40.827 "params": { 00:20:40.827 "name": "nvme0n1", 00:20:40.827 "enable": true 00:20:40.827 } 00:20:40.827 }, 00:20:40.827 { 00:20:40.827 "method": "bdev_wait_for_examine" 00:20:40.827 } 00:20:40.827 ] 00:20:40.827 }, 00:20:40.827 { 00:20:40.827 "subsystem": "nbd", 00:20:40.827 "config": [] 00:20:40.827 } 00:20:40.827 ] 00:20:40.827 }' 00:20:40.827 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.827 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.827 [2024-12-10 05:45:28.562237] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:20:40.827 [2024-12-10 05:45:28.562283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217142 ] 00:20:40.827 [2024-12-10 05:45:28.634152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.827 [2024-12-10 05:45:28.672852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.085 [2024-12-10 05:45:28.825176] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.650 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.650 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:41.650 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:41.650 05:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:41.907 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.908 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:41.908 Running I/O for 1 seconds... 00:20:43.099 5094.00 IOPS, 19.90 MiB/s 00:20:43.099 Latency(us) 00:20:43.099 [2024-12-10T04:45:30.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.099 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:43.099 Verification LBA range: start 0x0 length 0x2000 00:20:43.099 nvme0n1 : 1.02 5125.32 20.02 0.00 0.00 24751.99 6085.49 25590.25 00:20:43.099 [2024-12-10T04:45:30.995Z] =================================================================================================================== 00:20:43.099 [2024-12-10T04:45:30.995Z] Total : 5125.32 20.02 0.00 0.00 24751.99 6085.49 25590.25 00:20:43.099 { 00:20:43.099 "results": [ 00:20:43.099 { 00:20:43.099 "job": "nvme0n1", 00:20:43.099 "core_mask": "0x2", 00:20:43.099 "workload": "verify", 00:20:43.099 "status": "finished", 00:20:43.099 "verify_range": { 00:20:43.099 "start": 0, 00:20:43.099 "length": 8192 00:20:43.099 }, 00:20:43.099 "queue_depth": 128, 00:20:43.099 "io_size": 4096, 00:20:43.099 "runtime": 1.019058, 00:20:43.099 "iops": 5125.3216205554545, 00:20:43.099 "mibps": 20.020787580294744, 00:20:43.099 "io_failed": 0, 00:20:43.099 "io_timeout": 0, 00:20:43.099 "avg_latency_us": 24751.988744290364, 00:20:43.099 "min_latency_us": 6085.4857142857145, 00:20:43.099 "max_latency_us": 25590.24761904762 00:20:43.099 } 00:20:43.099 ], 00:20:43.099 "core_count": 1 00:20:43.099 } 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:43.099 05:45:30 
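Editor's note: in the bdevperf result block above, the MiB/s column is just the IOPS column scaled by the fixed 4096-byte I/O size (4096 B = 1/256 MiB), which is why 5125.32 IOPS reads as 20.02 MiB/s. A one-line sketch of that conversion (the helper name is ours):

```python
def mibps(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed per-I/O size."""
    return iops * io_size_bytes / (1 << 20)

# Figures from the result block above: 5125.3216... IOPS at 4096-byte I/Os.
print(round(mibps(5125.3216205554545, 4096), 2))  # → 20.02
```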
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:43.099 nvmf_trace.0 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1217142 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1217142 ']' 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1217142 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1217142 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217142' 00:20:43.099 killing process with pid 1217142 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1217142 00:20:43.099 Received shutdown signal, test time was about 1.000000 seconds 00:20:43.099 00:20:43.099 Latency(us) 00:20:43.099 [2024-12-10T04:45:30.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.099 [2024-12-10T04:45:30.995Z] =================================================================================================================== 00:20:43.099 [2024-12-10T04:45:30.995Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.099 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1217142 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.358 rmmod nvme_tcp 00:20:43.358 rmmod nvme_fabrics 00:20:43.358 rmmod nvme_keyring 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1216907 ']' 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1216907 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1216907 ']' 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1216907 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1216907 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1216907' 00:20:43.358 killing process with pid 1216907 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1216907 00:20:43.358 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1216907 00:20:43.616 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:43.617 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:43.617 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:43.617 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:20:43.617 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:43.617 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:43.617 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:43.617 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:43.617 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:43.617 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.617 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.617 05:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.519 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:45.519 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.JMTTA5jO7P /tmp/tmp.ezyLvz7kA1 /tmp/tmp.nl7cTjfzQg 00:20:45.778 00:20:45.778 real 1m21.111s 00:20:45.778 user 2m4.490s 00:20:45.778 sys 0m30.256s 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.778 ************************************ 00:20:45.778 END TEST nvmf_tls 00:20:45.778 ************************************ 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:45.778 ************************************ 00:20:45.778 START TEST nvmf_fips 00:20:45.778 ************************************ 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:45.778 * Looking for test storage... 00:20:45.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:45.778 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:45.779 
05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:45.779 05:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:45.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.779 --rc genhtml_branch_coverage=1 00:20:45.779 --rc genhtml_function_coverage=1 00:20:45.779 --rc genhtml_legend=1 00:20:45.779 --rc geninfo_all_blocks=1 00:20:45.779 --rc geninfo_unexecuted_blocks=1 00:20:45.779 00:20:45.779 ' 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:45.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.779 --rc genhtml_branch_coverage=1 00:20:45.779 --rc genhtml_function_coverage=1 00:20:45.779 --rc genhtml_legend=1 00:20:45.779 --rc geninfo_all_blocks=1 00:20:45.779 --rc geninfo_unexecuted_blocks=1 00:20:45.779 00:20:45.779 ' 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:45.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.779 --rc genhtml_branch_coverage=1 00:20:45.779 --rc genhtml_function_coverage=1 00:20:45.779 --rc genhtml_legend=1 00:20:45.779 --rc geninfo_all_blocks=1 00:20:45.779 --rc geninfo_unexecuted_blocks=1 00:20:45.779 00:20:45.779 ' 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:45.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.779 --rc genhtml_branch_coverage=1 00:20:45.779 --rc genhtml_function_coverage=1 00:20:45.779 --rc genhtml_legend=1 00:20:45.779 --rc geninfo_all_blocks=1 00:20:45.779 --rc geninfo_unexecuted_blocks=1 00:20:45.779 00:20:45.779 ' 00:20:45.779 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.041 05:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.041 05:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:46.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:46.041 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:46.042 Error setting digest 00:20:46.042 40D294A6DA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:46.042 40D294A6DA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:46.042 05:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:46.042 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:52.610 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:52.610 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:52.610 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:52.611 Found net devices under 0000:af:00.0: cvl_0_0 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:52.611 Found net devices under 0000:af:00.1: cvl_0_1 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.611 05:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:52.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:20:52.611 00:20:52.611 --- 10.0.0.2 ping statistics --- 00:20:52.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.611 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:20:52.611 00:20:52.611 --- 10.0.0.1 ping statistics --- 00:20:52.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.611 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:52.611 05:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1221090 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1221090 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1221090 ']' 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.611 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:52.611 [2024-12-10 05:45:39.942236] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:20:52.611 [2024-12-10 05:45:39.942285] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.611 [2024-12-10 05:45:40.021091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.611 [2024-12-10 05:45:40.067702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.611 [2024-12-10 05:45:40.067739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.611 [2024-12-10 05:45:40.067746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.611 [2024-12-10 05:45:40.067753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.611 [2024-12-10 05:45:40.067758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:52.611 [2024-12-10 05:45:40.068270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Ftp 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Ftp 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Ftp 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Ftp 00:20:53.177 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:53.177 [2024-12-10 05:45:40.979035] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.177 [2024-12-10 05:45:40.995050] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.177 [2024-12-10 05:45:40.995258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.177 malloc0 00:20:53.177 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.177 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1221331 00:20:53.177 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.177 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1221331 /var/tmp/bdevperf.sock 00:20:53.177 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1221331 ']' 00:20:53.177 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.177 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.177 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.177 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.177 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.435 [2024-12-10 05:45:41.122189] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:20:53.435 [2024-12-10 05:45:41.122240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221331 ] 00:20:53.435 [2024-12-10 05:45:41.195365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.435 [2024-12-10 05:45:41.234761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.691 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.691 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:53.691 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Ftp 00:20:53.691 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:53.947 [2024-12-10 05:45:41.695273] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:53.947 TLSTESTn1 00:20:53.947 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:54.204 Running I/O for 10 seconds... 
00:20:56.072 5248.00 IOPS, 20.50 MiB/s [2024-12-10T04:45:44.902Z] 5442.50 IOPS, 21.26 MiB/s [2024-12-10T04:45:46.276Z] 5482.00 IOPS, 21.41 MiB/s [2024-12-10T04:45:47.210Z] 5536.00 IOPS, 21.62 MiB/s [2024-12-10T04:45:48.144Z] 5529.80 IOPS, 21.60 MiB/s [2024-12-10T04:45:49.078Z] 5535.67 IOPS, 21.62 MiB/s [2024-12-10T04:45:50.011Z] 5532.57 IOPS, 21.61 MiB/s [2024-12-10T04:45:50.944Z] 5541.62 IOPS, 21.65 MiB/s [2024-12-10T04:45:52.314Z] 5527.22 IOPS, 21.59 MiB/s [2024-12-10T04:45:52.314Z] 5534.80 IOPS, 21.62 MiB/s 00:21:04.418 Latency(us) 00:21:04.418 [2024-12-10T04:45:52.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.418 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:04.418 Verification LBA range: start 0x0 length 0x2000 00:21:04.418 TLSTESTn1 : 10.01 5540.50 21.64 0.00 0.00 23069.07 5211.67 32955.25 00:21:04.418 [2024-12-10T04:45:52.314Z] =================================================================================================================== 00:21:04.418 [2024-12-10T04:45:52.314Z] Total : 5540.50 21.64 0.00 0.00 23069.07 5211.67 32955.25 00:21:04.418 { 00:21:04.418 "results": [ 00:21:04.418 { 00:21:04.418 "job": "TLSTESTn1", 00:21:04.418 "core_mask": "0x4", 00:21:04.418 "workload": "verify", 00:21:04.418 "status": "finished", 00:21:04.418 "verify_range": { 00:21:04.418 "start": 0, 00:21:04.418 "length": 8192 00:21:04.418 }, 00:21:04.418 "queue_depth": 128, 00:21:04.418 "io_size": 4096, 00:21:04.418 "runtime": 10.012642, 00:21:04.418 "iops": 5540.495705329323, 00:21:04.418 "mibps": 21.642561348942667, 00:21:04.418 "io_failed": 0, 00:21:04.418 "io_timeout": 0, 00:21:04.418 "avg_latency_us": 23069.070568690313, 00:21:04.418 "min_latency_us": 5211.672380952381, 00:21:04.418 "max_latency_us": 32955.24571428572 00:21:04.418 } 00:21:04.418 ], 00:21:04.418 "core_count": 1 00:21:04.418 } 00:21:04.418 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:04.418 
05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:04.418 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:04.418 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:04.419 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:04.419 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:04.419 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:04.419 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:04.419 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:04.419 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:04.419 nvmf_trace.0 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1221331 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1221331 ']' 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1221331 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1221331 00:21:04.419 05:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1221331' 00:21:04.419 killing process with pid 1221331 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1221331 00:21:04.419 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.419 00:21:04.419 Latency(us) 00:21:04.419 [2024-12-10T04:45:52.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.419 [2024-12-10T04:45:52.315Z] =================================================================================================================== 00:21:04.419 [2024-12-10T04:45:52.315Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1221331 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:04.419 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:04.419 rmmod nvme_tcp 00:21:04.419 rmmod nvme_fabrics 00:21:04.419 rmmod nvme_keyring 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1221090 ']' 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1221090 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1221090 ']' 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1221090 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1221090 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1221090' 00:21:04.678 killing process with pid 1221090 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1221090 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1221090 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.678 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.211 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:07.211 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Ftp 00:21:07.211 00:21:07.211 real 0m21.122s 00:21:07.211 user 0m22.097s 00:21:07.211 sys 0m9.580s 00:21:07.211 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.211 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:07.211 ************************************ 00:21:07.211 END TEST nvmf_fips 00:21:07.211 ************************************ 00:21:07.211 05:45:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:07.212 ************************************ 00:21:07.212 START TEST nvmf_control_msg_list 00:21:07.212 ************************************ 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:07.212 * Looking for test storage... 00:21:07.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.212 05:45:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:07.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.212 --rc genhtml_branch_coverage=1 00:21:07.212 --rc genhtml_function_coverage=1 00:21:07.212 --rc genhtml_legend=1 00:21:07.212 --rc geninfo_all_blocks=1 00:21:07.212 --rc geninfo_unexecuted_blocks=1 00:21:07.212 00:21:07.212 ' 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:07.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.212 --rc genhtml_branch_coverage=1 00:21:07.212 --rc genhtml_function_coverage=1 00:21:07.212 --rc genhtml_legend=1 00:21:07.212 --rc geninfo_all_blocks=1 00:21:07.212 --rc geninfo_unexecuted_blocks=1 00:21:07.212 00:21:07.212 ' 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:07.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.212 --rc genhtml_branch_coverage=1 00:21:07.212 --rc genhtml_function_coverage=1 00:21:07.212 --rc genhtml_legend=1 00:21:07.212 --rc geninfo_all_blocks=1 00:21:07.212 --rc geninfo_unexecuted_blocks=1 00:21:07.212 00:21:07.212 ' 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:21:07.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.212 --rc genhtml_branch_coverage=1 00:21:07.212 --rc genhtml_function_coverage=1 00:21:07.212 --rc genhtml_legend=1 00:21:07.212 --rc geninfo_all_blocks=1 00:21:07.212 --rc geninfo_unexecuted_blocks=1 00:21:07.212 00:21:07.212 ' 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.212 05:45:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.212 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:07.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:07.213 05:45:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:07.213 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.778 05:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.778 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:13.779 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:13.779 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.779 05:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:13.779 Found net devices under 0000:af:00.0: cvl_0_0 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.779 05:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:13.779 Found net devices under 0000:af:00.1: cvl_0_1 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.779 05:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:13.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:21:13.779 00:21:13.779 --- 10.0.0.2 ping statistics --- 00:21:13.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.779 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:13.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:21:13.779 00:21:13.779 --- 10.0.0.1 ping statistics --- 00:21:13.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.779 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1226591 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1226591 00:21:13.779 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1226591 ']' 00:21:13.780 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.780 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.780 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:13.780 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.780 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.780 [2024-12-10 05:46:00.849853] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:21:13.780 [2024-12-10 05:46:00.849899] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.780 [2024-12-10 05:46:00.927201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.780 [2024-12-10 05:46:00.965366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.780 [2024-12-10 05:46:00.965402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.780 [2024-12-10 05:46:00.965411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.780 [2024-12-10 05:46:00.965418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.780 [2024-12-10 05:46:00.965423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:13.780 [2024-12-10 05:46:00.965926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.780 [2024-12-10 05:46:01.110404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.780 Malloc0 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.780 [2024-12-10 05:46:01.150729] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1226616 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1226617 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1226618 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:13.780 05:46:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1226616 00:21:13.780 [2024-12-10 05:46:01.229191] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:13.780 [2024-12-10 05:46:01.239223] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:13.780 [2024-12-10 05:46:01.249104] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:14.711 Initializing NVMe Controllers 00:21:14.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:14.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:14.711 Initialization complete. Launching workers. 00:21:14.711 ======================================================== 00:21:14.711 Latency(us) 00:21:14.711 Device Information : IOPS MiB/s Average min max 00:21:14.711 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41075.90 40806.13 41925.77 00:21:14.711 ======================================================== 00:21:14.711 Total : 25.00 0.10 41075.90 40806.13 41925.77 00:21:14.711 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1226617 00:21:14.711 Initializing NVMe Controllers 00:21:14.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:14.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:14.711 Initialization complete. Launching workers. 
00:21:14.711 ======================================================== 00:21:14.711 Latency(us) 00:21:14.711 Device Information : IOPS MiB/s Average min max 00:21:14.711 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6157.00 24.05 162.06 125.13 41735.04 00:21:14.711 ======================================================== 00:21:14.711 Total : 6157.00 24.05 162.06 125.13 41735.04 00:21:14.711 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1226618 00:21:14.711 Initializing NVMe Controllers 00:21:14.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:14.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:14.711 Initialization complete. Launching workers. 00:21:14.711 ======================================================== 00:21:14.711 Latency(us) 00:21:14.711 Device Information : IOPS MiB/s Average min max 00:21:14.711 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6380.96 24.93 156.37 123.48 394.81 00:21:14.711 ======================================================== 00:21:14.711 Total : 6380.96 24.93 156.37 123.48 394.81 00:21:14.711 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:14.711 05:46:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.711 rmmod nvme_tcp 00:21:14.711 rmmod nvme_fabrics 00:21:14.711 rmmod nvme_keyring 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1226591 ']' 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1226591 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1226591 ']' 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1226591 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1226591 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1226591' 00:21:14.711 killing process with pid 1226591 00:21:14.711 
05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1226591 00:21:14.711 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1226591 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.970 05:46:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.874 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.874 00:21:16.874 real 0m10.005s 00:21:16.874 user 0m6.411s 00:21:16.874 sys 0m5.366s 00:21:16.874 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.874 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.874 ************************************ 00:21:16.874 END TEST nvmf_control_msg_list 00:21:16.874 ************************************ 00:21:16.874 05:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:16.874 05:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:16.874 05:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.874 05:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:16.874 ************************************ 00:21:16.874 START TEST nvmf_wait_for_buf 00:21:16.874 ************************************ 00:21:16.874 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:17.134 * Looking for test storage... 
00:21:17.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:21:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.134 --rc genhtml_branch_coverage=1 00:21:17.134 --rc genhtml_function_coverage=1 00:21:17.134 --rc genhtml_legend=1 00:21:17.134 --rc geninfo_all_blocks=1 00:21:17.134 --rc geninfo_unexecuted_blocks=1 00:21:17.134 00:21:17.134 ' 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.134 --rc genhtml_branch_coverage=1 00:21:17.134 --rc genhtml_function_coverage=1 00:21:17.134 --rc genhtml_legend=1 00:21:17.134 --rc geninfo_all_blocks=1 00:21:17.134 --rc geninfo_unexecuted_blocks=1 00:21:17.134 00:21:17.134 ' 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.134 --rc genhtml_branch_coverage=1 00:21:17.134 --rc genhtml_function_coverage=1 00:21:17.134 --rc genhtml_legend=1 00:21:17.134 --rc geninfo_all_blocks=1 00:21:17.134 --rc geninfo_unexecuted_blocks=1 00:21:17.134 00:21:17.134 ' 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.134 --rc genhtml_branch_coverage=1 00:21:17.134 --rc genhtml_function_coverage=1 00:21:17.134 --rc genhtml_legend=1 00:21:17.134 --rc geninfo_all_blocks=1 00:21:17.134 --rc geninfo_unexecuted_blocks=1 00:21:17.134 00:21:17.134 ' 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.134 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:17.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:17.135 05:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:23.702 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:23.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:23.703 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:23.703 Found net devices under 0000:af:00.0: cvl_0_0 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.703 05:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:23.703 Found net devices under 0000:af:00.1: cvl_0_1 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.703 05:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.703 05:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:21:23.703 00:21:23.703 --- 10.0.0.2 ping statistics --- 00:21:23.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.703 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:21:23.703 00:21:23.703 --- 10.0.0.1 ping statistics --- 00:21:23.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.703 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1230302 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 1230302 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1230302 ']' 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.703 05:46:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.703 [2024-12-10 05:46:11.006323] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:21:23.703 [2024-12-10 05:46:11.006369] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.703 [2024-12-10 05:46:11.084649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.703 [2024-12-10 05:46:11.124284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.703 [2024-12-10 05:46:11.124318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:23.703 [2024-12-10 05:46:11.124325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.703 [2024-12-10 05:46:11.124331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.704 [2024-12-10 05:46:11.124336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.704 [2024-12-10 05:46:11.124820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.961 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.961 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:23.961 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.961 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.961 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.219 
05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.219 Malloc0 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:24.219 [2024-12-10 05:46:11.992540] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:24.219 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.220 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.220 05:46:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.220 05:46:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:24.220 05:46:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.220 05:46:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.220 05:46:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.220 05:46:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:24.220 05:46:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.220 05:46:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.220 [2024-12-10 05:46:12.020733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.220 05:46:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:24.220 05:46:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:24.220 [2024-12-10 05:46:12.107260] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:26.118 Initializing NVMe Controllers 00:21:26.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:26.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:26.118 Initialization complete. Launching workers. 00:21:26.118 ======================================================== 00:21:26.118 Latency(us) 00:21:26.118 Device Information : IOPS MiB/s Average min max 00:21:26.118 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 89.00 11.12 46728.84 31900.13 191534.35 00:21:26.118 ======================================================== 00:21:26.118 Total : 89.00 11.12 46728.84 31900.13 191534.35 00:21:26.118 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.118 05:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1398 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1398 -eq 0 ]] 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:26.118 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:26.118 rmmod nvme_tcp 00:21:26.118 rmmod nvme_fabrics 00:21:26.119 rmmod nvme_keyring 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1230302 ']' 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1230302 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1230302 ']' 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1230302 
00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1230302 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1230302' 00:21:26.119 killing process with pid 1230302 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1230302 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1230302 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:26.119 05:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.119 05:46:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.135 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:28.135 00:21:28.135 real 0m11.216s 00:21:28.135 user 0m4.886s 00:21:28.135 sys 0m4.867s 00:21:28.135 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.135 05:46:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:28.135 ************************************ 00:21:28.135 END TEST nvmf_wait_for_buf 00:21:28.135 ************************************ 00:21:28.135 05:46:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:28.135 05:46:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:28.135 05:46:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:28.135 05:46:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:28.135 05:46:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:28.135 05:46:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:34.702 
05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:34.702 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.702 05:46:21 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:34.702 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:34.702 Found net devices under 0000:af:00.0: cvl_0_0 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:34.702 Found net devices under 0000:af:00.1: cvl_0_1 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.702 ************************************ 00:21:34.702 START TEST nvmf_perf_adq 00:21:34.702 ************************************ 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:34.702 * Looking for test storage... 00:21:34.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:34.702 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:34.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.703 --rc genhtml_branch_coverage=1 00:21:34.703 --rc genhtml_function_coverage=1 00:21:34.703 --rc genhtml_legend=1 00:21:34.703 --rc geninfo_all_blocks=1 00:21:34.703 --rc geninfo_unexecuted_blocks=1 00:21:34.703 00:21:34.703 ' 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:34.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.703 --rc genhtml_branch_coverage=1 00:21:34.703 --rc genhtml_function_coverage=1 00:21:34.703 --rc genhtml_legend=1 00:21:34.703 --rc geninfo_all_blocks=1 00:21:34.703 --rc geninfo_unexecuted_blocks=1 00:21:34.703 00:21:34.703 ' 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:34.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.703 --rc genhtml_branch_coverage=1 00:21:34.703 --rc genhtml_function_coverage=1 00:21:34.703 --rc genhtml_legend=1 00:21:34.703 --rc geninfo_all_blocks=1 00:21:34.703 --rc geninfo_unexecuted_blocks=1 00:21:34.703 00:21:34.703 ' 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:34.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.703 --rc genhtml_branch_coverage=1 00:21:34.703 --rc genhtml_function_coverage=1 00:21:34.703 --rc genhtml_legend=1 00:21:34.703 --rc geninfo_all_blocks=1 00:21:34.703 --rc geninfo_unexecuted_blocks=1 00:21:34.703 00:21:34.703 ' 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.703 05:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.703 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:39.975 05:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:39.975 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:39.975 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:39.975 Found net devices under 0000:af:00.0: cvl_0_0 00:21:39.975 05:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.975 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.976 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:39.976 Found net devices under 0000:af:00.1: cvl_0_1 00:21:39.976 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.976 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:39.976 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.976 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:39.976 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:39.976 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:39.976 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:39.976 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:41.352 05:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:43.902 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
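The adq_reload_driver step traced above (perf_adq.sh lines 58-63) loads the sch_mqprio qdisc module, then unloads and reloads the ice driver before the test starts. A dry-run sketch of that sequence; the function name and DRY_RUN guard are illustrative, and actually executing it needs root plus an Intel E810 NIC:

```shell
#!/usr/bin/env bash
# Sketch of the adq_reload_driver step from perf_adq.sh. With DRY_RUN=1
# (the default) it only prints the commands it would run.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "${DRY_RUN}" -eq 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

adq_reload_driver() {
  run modprobe -a sch_mqprio   # qdisc module used by ADQ traffic classes
  run rmmod ice                # unload the E810 driver...
  run modprobe ice             # ...and load it fresh
  run sleep 5                  # let the ports come back up
}

adq_reload_driver
```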
nvmf/common.sh@315 -- # pci_devs=() 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:49.172 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:49.172 05:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:49.172 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:49.172 Found net devices under 0000:af:00.0: cvl_0_0 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:49.172 Found net devices under 0000:af:00.1: cvl_0_1 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:49.172 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:49.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:49.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:21:49.173 00:21:49.173 --- 10.0.0.2 ping statistics --- 00:21:49.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.173 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:21:49.173 00:21:49.173 --- 10.0.0.1 ping statistics --- 00:21:49.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.173 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1238823 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1238823 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1238823 ']' 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.173 [2024-12-10 05:46:36.549594] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:21:49.173 [2024-12-10 05:46:36.549640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.173 [2024-12-10 05:46:36.629776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.173 [2024-12-10 05:46:36.669618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.173 [2024-12-10 05:46:36.669658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.173 [2024-12-10 05:46:36.669665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.173 [2024-12-10 05:46:36.669671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.173 [2024-12-10 05:46:36.669677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
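Before the target comes up in the trace above, nvmftestinit moves one NIC port into a private network namespace (cvl_0_0_ns_spdk), assigns 10.0.0.1/10.0.0.2 to the two ports, opens TCP port 4420 in iptables, and ping-checks both directions. A dry-run sketch of that wiring with names taken from the log; real execution needs root and the same hardware:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init network wiring seen in the log:
# one NIC port (cvl_0_0) becomes the target inside a namespace, the
# other (cvl_0_1) stays in the host as the initiator.
DRY_RUN=${DRY_RUN:-1}
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target side, gets 10.0.0.2 inside the namespace
INI_IF=cvl_0_1   # initiator side, gets 10.0.0.1 in the host

run() {
  if [ "${DRY_RUN}" -eq 1 ]; then echo "would run: $*"; else "$@"; fi
}

nvmf_tcp_wiring() {
  run ip netns add "$NS"
  run ip link set "$TGT_IF" netns "$NS"
  run ip addr add 10.0.0.1/24 dev "$INI_IF"
  run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  run ip link set "$INI_IF" up
  run ip netns exec "$NS" ip link set "$TGT_IF" up
  run ip netns exec "$NS" ip link set lo up
  run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2   # initiator -> target sanity check
}

nvmf_tcp_wiring
```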
00:21:49.173 [2024-12-10 05:46:36.671137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.173 [2024-12-10 05:46:36.671256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.173 [2024-12-10 05:46:36.671293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.173 [2024-12-10 05:46:36.671294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:49.173 05:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.173 [2024-12-10 05:46:36.881413] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.173 Malloc1 00:21:49.173 05:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.173 [2024-12-10 05:46:36.939103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1238998 00:21:49.173 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:49.173 05:46:36 
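The adq_configure_nvmf_target sequence above drives the target over JSON-RPC: set placement IDs and zero-copy sends on the default (posix) sock implementation, finish framework init, create the TCP transport, then publish a malloc bdev behind a subsystem listener. A dry-run sketch of the same calls as a plain rpc.py script; the RPC path is an assumption, and the real test runs these against nvmf_tgt inside the namespace:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the adq_configure_nvmf_target RPC sequence from the log.
# RPC path is illustrative; DRY_RUN=1 (default) only prints the commands.
DRY_RUN=${DRY_RUN:-1}
RPC=scripts/rpc.py   # assumed location of SPDK's rpc.py

run() {
  if [ "${DRY_RUN}" -eq 1 ]; then echo "would run: $*"; else "$@"; fi
}

configure_target() {
  run "$RPC" sock_impl_set_options --enable-placement-id 0 \
      --enable-zerocopy-send-server -i posix
  run "$RPC" framework_start_init
  run "$RPC" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  run "$RPC" bdev_malloc_create 64 512 -b Malloc1
  run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  run "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
}

configure_target
```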
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:51.069 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:51.069 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.069 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.327 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.327 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:51.327 "tick_rate": 2100000000, 00:21:51.327 "poll_groups": [ 00:21:51.327 { 00:21:51.327 "name": "nvmf_tgt_poll_group_000", 00:21:51.327 "admin_qpairs": 1, 00:21:51.327 "io_qpairs": 1, 00:21:51.327 "current_admin_qpairs": 1, 00:21:51.327 "current_io_qpairs": 1, 00:21:51.327 "pending_bdev_io": 0, 00:21:51.327 "completed_nvme_io": 19491, 00:21:51.327 "transports": [ 00:21:51.327 { 00:21:51.327 "trtype": "TCP" 00:21:51.327 } 00:21:51.327 ] 00:21:51.327 }, 00:21:51.327 { 00:21:51.327 "name": "nvmf_tgt_poll_group_001", 00:21:51.327 "admin_qpairs": 0, 00:21:51.327 "io_qpairs": 1, 00:21:51.327 "current_admin_qpairs": 0, 00:21:51.327 "current_io_qpairs": 1, 00:21:51.327 "pending_bdev_io": 0, 00:21:51.327 "completed_nvme_io": 19515, 00:21:51.327 "transports": [ 00:21:51.327 { 00:21:51.327 "trtype": "TCP" 00:21:51.327 } 00:21:51.327 ] 00:21:51.327 }, 00:21:51.327 { 00:21:51.327 "name": "nvmf_tgt_poll_group_002", 00:21:51.327 "admin_qpairs": 0, 00:21:51.327 "io_qpairs": 1, 00:21:51.327 "current_admin_qpairs": 0, 00:21:51.327 "current_io_qpairs": 1, 00:21:51.327 "pending_bdev_io": 0, 00:21:51.327 "completed_nvme_io": 19906, 00:21:51.327 
"transports": [ 00:21:51.327 { 00:21:51.327 "trtype": "TCP" 00:21:51.327 } 00:21:51.327 ] 00:21:51.327 }, 00:21:51.327 { 00:21:51.327 "name": "nvmf_tgt_poll_group_003", 00:21:51.327 "admin_qpairs": 0, 00:21:51.327 "io_qpairs": 1, 00:21:51.327 "current_admin_qpairs": 0, 00:21:51.327 "current_io_qpairs": 1, 00:21:51.327 "pending_bdev_io": 0, 00:21:51.327 "completed_nvme_io": 19268, 00:21:51.327 "transports": [ 00:21:51.327 { 00:21:51.327 "trtype": "TCP" 00:21:51.327 } 00:21:51.327 ] 00:21:51.327 } 00:21:51.327 ] 00:21:51.327 }' 00:21:51.327 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:51.327 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:51.327 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:51.327 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:51.327 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1238998 00:21:59.434 Initializing NVMe Controllers 00:21:59.434 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:59.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:59.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:59.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:59.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:59.434 Initialization complete. Launching workers. 
00:21:59.435 ======================================================== 00:21:59.435 Latency(us) 00:21:59.435 Device Information : IOPS MiB/s Average min max 00:21:59.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10309.20 40.27 6207.93 1917.90 10569.70 00:21:59.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10443.40 40.79 6128.16 1883.21 10491.06 00:21:59.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10542.70 41.18 6070.39 2344.50 10541.96 00:21:59.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10433.20 40.75 6134.67 2358.25 10238.85 00:21:59.435 ======================================================== 00:21:59.435 Total : 41728.49 163.00 6134.90 1883.21 10569.70 00:21:59.435 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.435 rmmod nvme_tcp 00:21:59.435 rmmod nvme_fabrics 00:21:59.435 rmmod nvme_keyring 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:59.435 05:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1238823 ']' 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1238823 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1238823 ']' 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1238823 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1238823 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1238823' 00:21:59.435 killing process with pid 1238823 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1238823 00:21:59.435 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1238823 00:21:59.694 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.694 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:59.694 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:59.694 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:59.694 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:59.694 
05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:59.694 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:59.694 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.694 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.694 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.694 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.694 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.598 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.856 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:01.856 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:01.856 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:02.793 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:05.326 05:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.594 05:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.594 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:10.594 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:10.595 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:10.595 Found net devices under 0000:af:00.0: cvl_0_0 00:22:10.595 05:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:10.595 Found net devices under 0000:af:00.1: cvl_0_1 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.853 ms 00:22:10.595 00:22:10.595 --- 10.0.0.2 ping statistics --- 00:22:10.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.595 rtt min/avg/max/mdev = 0.853/0.853/0.853/0.000 ms 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:10.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:22:10.595 00:22:10.595 --- 10.0.0.1 ping statistics --- 00:22:10.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.595 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:10.595 net.core.busy_poll = 1 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:10.595 net.core.busy_read = 1 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:10.595 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1242876 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1242876 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1242876 ']' 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.854 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.854 [2024-12-10 05:46:58.683180] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:22:10.854 [2024-12-10 05:46:58.683224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.113 [2024-12-10 05:46:58.759533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.113 [2024-12-10 05:46:58.799991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.113 [2024-12-10 05:46:58.800026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.113 [2024-12-10 05:46:58.800033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.113 [2024-12-10 05:46:58.800039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:11.113 [2024-12-10 05:46:58.800044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.113 [2024-12-10 05:46:58.801355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.113 [2024-12-10 05:46:58.801466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.113 [2024-12-10 05:46:58.801574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.113 [2024-12-10 05:46:58.801575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.113 05:46:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.113 [2024-12-10 05:46:58.998242] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.371 05:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.371 Malloc1 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.371 [2024-12-10 05:46:59.063033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1242904 
00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:11.371 05:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:13.270 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:13.270 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.270 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.270 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.270 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:13.270 "tick_rate": 2100000000, 00:22:13.270 "poll_groups": [ 00:22:13.270 { 00:22:13.270 "name": "nvmf_tgt_poll_group_000", 00:22:13.270 "admin_qpairs": 1, 00:22:13.270 "io_qpairs": 0, 00:22:13.270 "current_admin_qpairs": 1, 00:22:13.270 "current_io_qpairs": 0, 00:22:13.270 "pending_bdev_io": 0, 00:22:13.270 "completed_nvme_io": 0, 00:22:13.270 "transports": [ 00:22:13.270 { 00:22:13.270 "trtype": "TCP" 00:22:13.270 } 00:22:13.270 ] 00:22:13.270 }, 00:22:13.270 { 00:22:13.270 "name": "nvmf_tgt_poll_group_001", 00:22:13.270 "admin_qpairs": 0, 00:22:13.270 "io_qpairs": 4, 00:22:13.270 "current_admin_qpairs": 0, 00:22:13.270 "current_io_qpairs": 4, 00:22:13.270 "pending_bdev_io": 0, 00:22:13.270 "completed_nvme_io": 44075, 00:22:13.270 "transports": [ 00:22:13.270 { 00:22:13.270 "trtype": "TCP" 00:22:13.270 } 00:22:13.270 ] 00:22:13.270 }, 00:22:13.270 { 00:22:13.270 "name": "nvmf_tgt_poll_group_002", 00:22:13.270 "admin_qpairs": 0, 00:22:13.270 "io_qpairs": 0, 00:22:13.270 "current_admin_qpairs": 0, 00:22:13.270 
"current_io_qpairs": 0, 00:22:13.270 "pending_bdev_io": 0, 00:22:13.270 "completed_nvme_io": 0, 00:22:13.270 "transports": [ 00:22:13.270 { 00:22:13.270 "trtype": "TCP" 00:22:13.270 } 00:22:13.270 ] 00:22:13.270 }, 00:22:13.270 { 00:22:13.270 "name": "nvmf_tgt_poll_group_003", 00:22:13.270 "admin_qpairs": 0, 00:22:13.270 "io_qpairs": 0, 00:22:13.270 "current_admin_qpairs": 0, 00:22:13.270 "current_io_qpairs": 0, 00:22:13.270 "pending_bdev_io": 0, 00:22:13.270 "completed_nvme_io": 0, 00:22:13.270 "transports": [ 00:22:13.270 { 00:22:13.270 "trtype": "TCP" 00:22:13.270 } 00:22:13.270 ] 00:22:13.270 } 00:22:13.270 ] 00:22:13.270 }' 00:22:13.270 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:13.270 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:13.270 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:22:13.270 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:22:13.270 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1242904 00:22:21.376 Initializing NVMe Controllers 00:22:21.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:21.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:21.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:21.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:21.376 Initialization complete. Launching workers. 
00:22:21.376 ======================================================== 00:22:21.376 Latency(us) 00:22:21.376 Device Information : IOPS MiB/s Average min max 00:22:21.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5846.18 22.84 10950.88 1331.53 56571.42 00:22:21.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6076.37 23.74 10545.96 1315.84 57537.50 00:22:21.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5739.08 22.42 11154.42 1467.33 57978.23 00:22:21.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5785.38 22.60 11085.18 1460.49 55250.16 00:22:21.376 ======================================================== 00:22:21.376 Total : 23447.01 91.59 10928.90 1315.84 57978.23 00:22:21.376 00:22:21.376 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:21.376 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:21.376 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:21.376 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.376 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:21.376 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.376 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.376 rmmod nvme_tcp 00:22:21.635 rmmod nvme_fabrics 00:22:21.635 rmmod nvme_keyring 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:21.635 05:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1242876 ']' 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1242876 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1242876 ']' 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1242876 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1242876 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1242876' 00:22:21.635 killing process with pid 1242876 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1242876 00:22:21.635 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1242876 00:22:21.894 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:21.894 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:21.894 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:21.894 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:21.894 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:21.894 
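The pass/fail gate for this ADQ test is the `nvmf_get_stats` check earlier in the log: with ADQ steering all I/O qpairs onto one poll group, the `jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l` pipeline counted 3 idle groups, and the test only fails when that count drops below 2. A minimal Python sketch of the same check, using the per-group counts from this run (only `nvmf_tgt_poll_group_001` carried the 4 I/O qpairs):

```python
import json

# Poll-group counts copied from the nvmf_get_stats output in this run.
stats_json = """
{
  "tick_rate": 2100000000,
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 0},
    {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 4},
    {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0},
    {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0}
  ]
}
"""

def idle_poll_groups(stats: dict) -> int:
    """Equivalent of: jq '.poll_groups[] | select(.current_io_qpairs == 0)' | wc -l"""
    return sum(1 for g in stats["poll_groups"] if g["current_io_qpairs"] == 0)

count = idle_poll_groups(json.loads(stats_json))
# The script fails only if count < 2, i.e. I/O spread across too many groups.
assert count >= 2, "ADQ did not steer I/O onto a single poll group"
print(count)  # → 3
```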
05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:21.894 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:21.894 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.894 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.894 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.894 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.894 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:25.182 00:22:25.182 real 0m51.026s 00:22:25.182 user 2m43.670s 00:22:25.182 sys 0m10.515s 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.182 ************************************ 00:22:25.182 END TEST nvmf_perf_adq 00:22:25.182 ************************************ 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:25.182 ************************************ 00:22:25.182 START TEST nvmf_shutdown 00:22:25.182 ************************************ 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:25.182 * Looking for test storage... 00:22:25.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:25.182 05:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.182 --rc genhtml_branch_coverage=1 00:22:25.182 --rc genhtml_function_coverage=1 00:22:25.182 --rc genhtml_legend=1 00:22:25.182 --rc geninfo_all_blocks=1 00:22:25.182 --rc geninfo_unexecuted_blocks=1 00:22:25.182 00:22:25.182 ' 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.182 --rc genhtml_branch_coverage=1 00:22:25.182 --rc genhtml_function_coverage=1 00:22:25.182 --rc genhtml_legend=1 00:22:25.182 --rc geninfo_all_blocks=1 00:22:25.182 --rc geninfo_unexecuted_blocks=1 00:22:25.182 00:22:25.182 ' 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.182 --rc genhtml_branch_coverage=1 00:22:25.182 --rc genhtml_function_coverage=1 00:22:25.182 --rc genhtml_legend=1 00:22:25.182 --rc geninfo_all_blocks=1 00:22:25.182 --rc geninfo_unexecuted_blocks=1 00:22:25.182 00:22:25.182 ' 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.182 --rc genhtml_branch_coverage=1 00:22:25.182 --rc genhtml_function_coverage=1 00:22:25.182 --rc genhtml_legend=1 00:22:25.182 --rc geninfo_all_blocks=1 00:22:25.182 --rc geninfo_unexecuted_blocks=1 00:22:25.182 00:22:25.182 ' 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.182 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
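The `lt 1.15 2` trace above is `scripts/common.sh` deciding whether the installed lcov predates 2.0: `cmp_versions` splits both version strings on `.`, `-`, and `:`, then compares component by component, treating missing components as 0. A hedged sketch of that comparison in Python (a simplification in the spirit of the shell helper, not a line-for-line port):

```python
import re

def cmp_versions(ver1: str, ver2: str) -> int:
    """Component-wise version compare, in the spirit of scripts/common.sh
    cmp_versions: split on '.', '-' and ':', compare numerically, and treat
    missing trailing components as 0. Returns -1, 0, or 1."""
    a = [int(x) for x in re.split(r"[.:-]", ver1)]
    b = [int(x) for x in re.split(r"[.:-]", ver2)]
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0
        y = b[i] if i < len(b) else 0
        if x != y:
            return -1 if x < y else 1
    return 0

# lcov 1.15 sorts before 2, so the trace above takes the pre-2.0 lcov options.
print(cmp_versions("1.15", "2"))     # → -1
print(cmp_versions("1.15", "1.15"))  # → 0
```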
00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:25.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:25.183 ************************************ 00:22:25.183 START TEST nvmf_shutdown_tc1 00:22:25.183 ************************************ 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:25.183 05:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:31.750 05:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.750 05:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:31.750 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.750 05:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.750 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:31.751 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:31.751 Found net devices under 0000:af:00.0: cvl_0_0 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:31.751 Found net devices under 0000:af:00.1: cvl_0_1 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.751 05:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:22:31.751 00:22:31.751 --- 10.0.0.2 ping statistics --- 00:22:31.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.751 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:22:31.751 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:22:31.751 00:22:31.751 --- 10.0.0.1 ping statistics --- 00:22:31.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.751 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1248456 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1248456 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1248456 ']' 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:31.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.751 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.751 [2024-12-10 05:47:19.104365] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:22:31.751 [2024-12-10 05:47:19.104417] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.751 [2024-12-10 05:47:19.182732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.751 [2024-12-10 05:47:19.223145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.751 [2024-12-10 05:47:19.223186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.751 [2024-12-10 05:47:19.223193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.751 [2024-12-10 05:47:19.223199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.751 [2024-12-10 05:47:19.223205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.751 [2024-12-10 05:47:19.224624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.751 [2024-12-10 05:47:19.224731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.751 [2024-12-10 05:47:19.224841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.751 [2024-12-10 05:47:19.224842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.752 [2024-12-10 05:47:19.368804] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.752 05:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.752 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.752 Malloc1 00:22:31.752 [2024-12-10 05:47:19.477736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.752 Malloc2 00:22:31.752 Malloc3 00:22:31.752 Malloc4 00:22:31.752 Malloc5 00:22:32.010 Malloc6 00:22:32.010 Malloc7 00:22:32.010 Malloc8 00:22:32.010 Malloc9 
00:22:32.010 Malloc10 00:22:32.010 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.010 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:32.010 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.010 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1248529 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1248529 /var/tmp/bdevperf.sock 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1248529 ']' 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.270 { 00:22:32.270 "params": { 00:22:32.270 "name": "Nvme$subsystem", 00:22:32.270 "trtype": "$TEST_TRANSPORT", 00:22:32.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.270 "adrfam": "ipv4", 00:22:32.270 "trsvcid": "$NVMF_PORT", 00:22:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.270 "hdgst": ${hdgst:-false}, 00:22:32.270 "ddgst": ${ddgst:-false} 00:22:32.270 }, 00:22:32.270 "method": "bdev_nvme_attach_controller" 00:22:32.270 } 00:22:32.270 EOF 00:22:32.270 )") 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.270 { 00:22:32.270 "params": { 00:22:32.270 "name": "Nvme$subsystem", 00:22:32.270 "trtype": "$TEST_TRANSPORT", 00:22:32.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.270 "adrfam": "ipv4", 00:22:32.270 "trsvcid": "$NVMF_PORT", 00:22:32.270 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.270 "hdgst": ${hdgst:-false}, 00:22:32.270 "ddgst": ${ddgst:-false} 00:22:32.270 }, 00:22:32.270 "method": "bdev_nvme_attach_controller" 00:22:32.270 } 00:22:32.270 EOF 00:22:32.270 )") 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.270 { 00:22:32.270 "params": { 00:22:32.270 "name": "Nvme$subsystem", 00:22:32.270 "trtype": "$TEST_TRANSPORT", 00:22:32.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.270 "adrfam": "ipv4", 00:22:32.270 "trsvcid": "$NVMF_PORT", 00:22:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.270 "hdgst": ${hdgst:-false}, 00:22:32.270 "ddgst": ${ddgst:-false} 00:22:32.270 }, 00:22:32.270 "method": "bdev_nvme_attach_controller" 00:22:32.270 } 00:22:32.270 EOF 00:22:32.270 )") 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.270 { 00:22:32.270 "params": { 00:22:32.270 "name": "Nvme$subsystem", 00:22:32.270 "trtype": "$TEST_TRANSPORT", 00:22:32.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.270 "adrfam": "ipv4", 00:22:32.270 "trsvcid": "$NVMF_PORT", 00:22:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.270 "hdgst": 
${hdgst:-false}, 00:22:32.270 "ddgst": ${ddgst:-false} 00:22:32.270 }, 00:22:32.270 "method": "bdev_nvme_attach_controller" 00:22:32.270 } 00:22:32.270 EOF 00:22:32.270 )") 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.270 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.270 { 00:22:32.270 "params": { 00:22:32.270 "name": "Nvme$subsystem", 00:22:32.270 "trtype": "$TEST_TRANSPORT", 00:22:32.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.270 "adrfam": "ipv4", 00:22:32.270 "trsvcid": "$NVMF_PORT", 00:22:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.271 "hdgst": ${hdgst:-false}, 00:22:32.271 "ddgst": ${ddgst:-false} 00:22:32.271 }, 00:22:32.271 "method": "bdev_nvme_attach_controller" 00:22:32.271 } 00:22:32.271 EOF 00:22:32.271 )") 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.271 { 00:22:32.271 "params": { 00:22:32.271 "name": "Nvme$subsystem", 00:22:32.271 "trtype": "$TEST_TRANSPORT", 00:22:32.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.271 "adrfam": "ipv4", 00:22:32.271 "trsvcid": "$NVMF_PORT", 00:22:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.271 "hdgst": ${hdgst:-false}, 00:22:32.271 "ddgst": ${ddgst:-false} 00:22:32.271 }, 00:22:32.271 "method": "bdev_nvme_attach_controller" 
00:22:32.271 } 00:22:32.271 EOF 00:22:32.271 )") 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.271 { 00:22:32.271 "params": { 00:22:32.271 "name": "Nvme$subsystem", 00:22:32.271 "trtype": "$TEST_TRANSPORT", 00:22:32.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.271 "adrfam": "ipv4", 00:22:32.271 "trsvcid": "$NVMF_PORT", 00:22:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.271 "hdgst": ${hdgst:-false}, 00:22:32.271 "ddgst": ${ddgst:-false} 00:22:32.271 }, 00:22:32.271 "method": "bdev_nvme_attach_controller" 00:22:32.271 } 00:22:32.271 EOF 00:22:32.271 )") 00:22:32.271 [2024-12-10 05:47:19.947018] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:22:32.271 [2024-12-10 05:47:19.947070] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.271 { 00:22:32.271 "params": { 00:22:32.271 "name": "Nvme$subsystem", 00:22:32.271 "trtype": "$TEST_TRANSPORT", 00:22:32.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.271 "adrfam": "ipv4", 00:22:32.271 "trsvcid": "$NVMF_PORT", 00:22:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.271 "hdgst": ${hdgst:-false}, 00:22:32.271 "ddgst": ${ddgst:-false} 00:22:32.271 }, 00:22:32.271 "method": "bdev_nvme_attach_controller" 00:22:32.271 } 00:22:32.271 EOF 00:22:32.271 )") 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.271 { 00:22:32.271 "params": { 00:22:32.271 "name": "Nvme$subsystem", 00:22:32.271 "trtype": "$TEST_TRANSPORT", 00:22:32.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.271 "adrfam": "ipv4", 00:22:32.271 "trsvcid": "$NVMF_PORT", 00:22:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.271 "hdgst": ${hdgst:-false}, 
00:22:32.271 "ddgst": ${ddgst:-false} 00:22:32.271 }, 00:22:32.271 "method": "bdev_nvme_attach_controller" 00:22:32.271 } 00:22:32.271 EOF 00:22:32.271 )") 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.271 { 00:22:32.271 "params": { 00:22:32.271 "name": "Nvme$subsystem", 00:22:32.271 "trtype": "$TEST_TRANSPORT", 00:22:32.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.271 "adrfam": "ipv4", 00:22:32.271 "trsvcid": "$NVMF_PORT", 00:22:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.271 "hdgst": ${hdgst:-false}, 00:22:32.271 "ddgst": ${ddgst:-false} 00:22:32.271 }, 00:22:32.271 "method": "bdev_nvme_attach_controller" 00:22:32.271 } 00:22:32.271 EOF 00:22:32.271 )") 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:32.271 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:32.271 "params": { 00:22:32.271 "name": "Nvme1", 00:22:32.271 "trtype": "tcp", 00:22:32.271 "traddr": "10.0.0.2", 00:22:32.271 "adrfam": "ipv4", 00:22:32.271 "trsvcid": "4420", 00:22:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.271 "hdgst": false, 00:22:32.271 "ddgst": false 00:22:32.271 }, 00:22:32.271 "method": "bdev_nvme_attach_controller" 00:22:32.271 },{ 00:22:32.271 "params": { 00:22:32.271 "name": "Nvme2", 00:22:32.271 "trtype": "tcp", 00:22:32.271 "traddr": "10.0.0.2", 00:22:32.271 "adrfam": "ipv4", 00:22:32.271 "trsvcid": "4420", 00:22:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:32.271 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:32.271 "hdgst": false, 00:22:32.271 "ddgst": false 00:22:32.271 }, 00:22:32.271 "method": "bdev_nvme_attach_controller" 00:22:32.271 },{ 00:22:32.271 "params": { 00:22:32.271 "name": "Nvme3", 00:22:32.271 "trtype": "tcp", 00:22:32.271 "traddr": "10.0.0.2", 00:22:32.271 "adrfam": "ipv4", 00:22:32.271 "trsvcid": "4420", 00:22:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:32.271 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:32.271 "hdgst": false, 00:22:32.271 "ddgst": false 00:22:32.271 }, 00:22:32.271 "method": "bdev_nvme_attach_controller" 00:22:32.271 },{ 00:22:32.271 "params": { 00:22:32.271 "name": "Nvme4", 00:22:32.271 "trtype": "tcp", 00:22:32.271 "traddr": "10.0.0.2", 00:22:32.271 "adrfam": "ipv4", 00:22:32.271 "trsvcid": "4420", 00:22:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:32.271 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:32.271 "hdgst": false, 00:22:32.271 "ddgst": false 00:22:32.271 }, 00:22:32.271 "method": "bdev_nvme_attach_controller" 00:22:32.271 },{ 00:22:32.271 "params": { 
00:22:32.271 "name": "Nvme5", 00:22:32.271 "trtype": "tcp", 00:22:32.271 "traddr": "10.0.0.2", 00:22:32.271 "adrfam": "ipv4", 00:22:32.271 "trsvcid": "4420", 00:22:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:32.271 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:32.271 "hdgst": false, 00:22:32.271 "ddgst": false 00:22:32.272 }, 00:22:32.272 "method": "bdev_nvme_attach_controller" 00:22:32.272 },{ 00:22:32.272 "params": { 00:22:32.272 "name": "Nvme6", 00:22:32.272 "trtype": "tcp", 00:22:32.272 "traddr": "10.0.0.2", 00:22:32.272 "adrfam": "ipv4", 00:22:32.272 "trsvcid": "4420", 00:22:32.272 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:32.272 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:32.272 "hdgst": false, 00:22:32.272 "ddgst": false 00:22:32.272 }, 00:22:32.272 "method": "bdev_nvme_attach_controller" 00:22:32.272 },{ 00:22:32.272 "params": { 00:22:32.272 "name": "Nvme7", 00:22:32.272 "trtype": "tcp", 00:22:32.272 "traddr": "10.0.0.2", 00:22:32.272 "adrfam": "ipv4", 00:22:32.272 "trsvcid": "4420", 00:22:32.272 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:32.272 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:32.272 "hdgst": false, 00:22:32.272 "ddgst": false 00:22:32.272 }, 00:22:32.272 "method": "bdev_nvme_attach_controller" 00:22:32.272 },{ 00:22:32.272 "params": { 00:22:32.272 "name": "Nvme8", 00:22:32.272 "trtype": "tcp", 00:22:32.272 "traddr": "10.0.0.2", 00:22:32.272 "adrfam": "ipv4", 00:22:32.272 "trsvcid": "4420", 00:22:32.272 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:32.272 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:32.272 "hdgst": false, 00:22:32.272 "ddgst": false 00:22:32.272 }, 00:22:32.272 "method": "bdev_nvme_attach_controller" 00:22:32.272 },{ 00:22:32.272 "params": { 00:22:32.272 "name": "Nvme9", 00:22:32.272 "trtype": "tcp", 00:22:32.272 "traddr": "10.0.0.2", 00:22:32.272 "adrfam": "ipv4", 00:22:32.272 "trsvcid": "4420", 00:22:32.272 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:32.272 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:32.272 "hdgst": false, 00:22:32.272 "ddgst": false 00:22:32.272 }, 00:22:32.272 "method": "bdev_nvme_attach_controller" 00:22:32.272 },{ 00:22:32.272 "params": { 00:22:32.272 "name": "Nvme10", 00:22:32.272 "trtype": "tcp", 00:22:32.272 "traddr": "10.0.0.2", 00:22:32.272 "adrfam": "ipv4", 00:22:32.272 "trsvcid": "4420", 00:22:32.272 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:32.272 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:32.272 "hdgst": false, 00:22:32.272 "ddgst": false 00:22:32.272 }, 00:22:32.272 "method": "bdev_nvme_attach_controller" 00:22:32.272 }' 00:22:32.272 [2024-12-10 05:47:20.026667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.272 [2024-12-10 05:47:20.072070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.647 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.647 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:33.647 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:33.647 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.647 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.647 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.647 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1248529 00:22:33.647 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:33.647 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:34.582 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1248529 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1248456 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:34.583 { 00:22:34.583 "params": { 00:22:34.583 "name": "Nvme$subsystem", 00:22:34.583 "trtype": "$TEST_TRANSPORT", 00:22:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.583 "adrfam": "ipv4", 00:22:34.583 "trsvcid": "$NVMF_PORT", 00:22:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.583 "hdgst": ${hdgst:-false}, 00:22:34.583 "ddgst": ${ddgst:-false} 00:22:34.583 }, 00:22:34.583 "method": "bdev_nvme_attach_controller" 00:22:34.583 } 00:22:34.583 EOF 00:22:34.583 )") 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:34.583 05:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:34.583 { 00:22:34.583 "params": { 00:22:34.583 "name": "Nvme$subsystem", 00:22:34.583 "trtype": "$TEST_TRANSPORT", 00:22:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.583 "adrfam": "ipv4", 00:22:34.583 "trsvcid": "$NVMF_PORT", 00:22:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.583 "hdgst": ${hdgst:-false}, 00:22:34.583 "ddgst": ${ddgst:-false} 00:22:34.583 }, 00:22:34.583 "method": "bdev_nvme_attach_controller" 00:22:34.583 } 00:22:34.583 EOF 00:22:34.583 )") 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:34.583 { 00:22:34.583 "params": { 00:22:34.583 "name": "Nvme$subsystem", 00:22:34.583 "trtype": "$TEST_TRANSPORT", 00:22:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.583 "adrfam": "ipv4", 00:22:34.583 "trsvcid": "$NVMF_PORT", 00:22:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.583 "hdgst": ${hdgst:-false}, 00:22:34.583 "ddgst": ${ddgst:-false} 00:22:34.583 }, 00:22:34.583 "method": "bdev_nvme_attach_controller" 00:22:34.583 } 00:22:34.583 EOF 00:22:34.583 )") 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:34.583 
05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:34.583 { 00:22:34.583 "params": { 00:22:34.583 "name": "Nvme$subsystem", 00:22:34.583 "trtype": "$TEST_TRANSPORT", 00:22:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.583 "adrfam": "ipv4", 00:22:34.583 "trsvcid": "$NVMF_PORT", 00:22:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.583 "hdgst": ${hdgst:-false}, 00:22:34.583 "ddgst": ${ddgst:-false} 00:22:34.583 }, 00:22:34.583 "method": "bdev_nvme_attach_controller" 00:22:34.583 } 00:22:34.583 EOF 00:22:34.583 )") 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:34.583 { 00:22:34.583 "params": { 00:22:34.583 "name": "Nvme$subsystem", 00:22:34.583 "trtype": "$TEST_TRANSPORT", 00:22:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.583 "adrfam": "ipv4", 00:22:34.583 "trsvcid": "$NVMF_PORT", 00:22:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.583 "hdgst": ${hdgst:-false}, 00:22:34.583 "ddgst": ${ddgst:-false} 00:22:34.583 }, 00:22:34.583 "method": "bdev_nvme_attach_controller" 00:22:34.583 } 00:22:34.583 EOF 00:22:34.583 )") 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:22:34.583 { 00:22:34.583 "params": { 00:22:34.583 "name": "Nvme$subsystem", 00:22:34.583 "trtype": "$TEST_TRANSPORT", 00:22:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.583 "adrfam": "ipv4", 00:22:34.583 "trsvcid": "$NVMF_PORT", 00:22:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.583 "hdgst": ${hdgst:-false}, 00:22:34.583 "ddgst": ${ddgst:-false} 00:22:34.583 }, 00:22:34.583 "method": "bdev_nvme_attach_controller" 00:22:34.583 } 00:22:34.583 EOF 00:22:34.583 )") 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:34.583 { 00:22:34.583 "params": { 00:22:34.583 "name": "Nvme$subsystem", 00:22:34.583 "trtype": "$TEST_TRANSPORT", 00:22:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.583 "adrfam": "ipv4", 00:22:34.583 "trsvcid": "$NVMF_PORT", 00:22:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.583 "hdgst": ${hdgst:-false}, 00:22:34.583 "ddgst": ${ddgst:-false} 00:22:34.583 }, 00:22:34.583 "method": "bdev_nvme_attach_controller" 00:22:34.583 } 00:22:34.583 EOF 00:22:34.583 )") 00:22:34.583 [2024-12-10 05:47:22.444974] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:22:34.583 [2024-12-10 05:47:22.445023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249005 ] 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:34.583 { 00:22:34.583 "params": { 00:22:34.583 "name": "Nvme$subsystem", 00:22:34.583 "trtype": "$TEST_TRANSPORT", 00:22:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.583 "adrfam": "ipv4", 00:22:34.583 "trsvcid": "$NVMF_PORT", 00:22:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.583 "hdgst": ${hdgst:-false}, 00:22:34.583 "ddgst": ${ddgst:-false} 00:22:34.583 }, 00:22:34.583 "method": "bdev_nvme_attach_controller" 00:22:34.583 } 00:22:34.583 EOF 00:22:34.583 )") 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:34.583 { 00:22:34.583 "params": { 00:22:34.583 "name": "Nvme$subsystem", 00:22:34.583 "trtype": "$TEST_TRANSPORT", 00:22:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.583 "adrfam": "ipv4", 00:22:34.583 "trsvcid": "$NVMF_PORT", 00:22:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.583 "hdgst": 
${hdgst:-false}, 00:22:34.583 "ddgst": ${ddgst:-false} 00:22:34.583 }, 00:22:34.583 "method": "bdev_nvme_attach_controller" 00:22:34.583 } 00:22:34.583 EOF 00:22:34.583 )") 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:34.583 { 00:22:34.583 "params": { 00:22:34.583 "name": "Nvme$subsystem", 00:22:34.583 "trtype": "$TEST_TRANSPORT", 00:22:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.583 "adrfam": "ipv4", 00:22:34.583 "trsvcid": "$NVMF_PORT", 00:22:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.583 "hdgst": ${hdgst:-false}, 00:22:34.583 "ddgst": ${ddgst:-false} 00:22:34.583 }, 00:22:34.583 "method": "bdev_nvme_attach_controller" 00:22:34.583 } 00:22:34.583 EOF 00:22:34.583 )") 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:34.583 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:34.842 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:34.842 05:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:34.842 "params": { 00:22:34.842 "name": "Nvme1", 00:22:34.842 "trtype": "tcp", 00:22:34.842 "traddr": "10.0.0.2", 00:22:34.842 "adrfam": "ipv4", 00:22:34.842 "trsvcid": "4420", 00:22:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.842 "hdgst": false, 00:22:34.842 "ddgst": false 00:22:34.842 }, 00:22:34.842 "method": "bdev_nvme_attach_controller" 00:22:34.842 },{ 00:22:34.842 "params": { 00:22:34.842 "name": "Nvme2", 00:22:34.842 "trtype": "tcp", 00:22:34.842 "traddr": "10.0.0.2", 00:22:34.842 "adrfam": "ipv4", 00:22:34.842 "trsvcid": "4420", 00:22:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:34.842 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:34.842 "hdgst": false, 00:22:34.842 "ddgst": false 00:22:34.842 }, 00:22:34.842 "method": "bdev_nvme_attach_controller" 00:22:34.842 },{ 00:22:34.842 "params": { 00:22:34.842 "name": "Nvme3", 00:22:34.842 "trtype": "tcp", 00:22:34.842 "traddr": "10.0.0.2", 00:22:34.842 "adrfam": "ipv4", 00:22:34.842 "trsvcid": "4420", 00:22:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:34.842 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:34.842 "hdgst": false, 00:22:34.842 "ddgst": false 00:22:34.842 }, 00:22:34.842 "method": "bdev_nvme_attach_controller" 00:22:34.842 },{ 00:22:34.842 "params": { 00:22:34.842 "name": "Nvme4", 00:22:34.842 "trtype": "tcp", 00:22:34.842 "traddr": "10.0.0.2", 00:22:34.842 "adrfam": "ipv4", 00:22:34.842 "trsvcid": "4420", 00:22:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:34.842 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:34.842 "hdgst": false, 00:22:34.842 "ddgst": false 00:22:34.842 }, 00:22:34.842 "method": "bdev_nvme_attach_controller" 00:22:34.842 },{ 00:22:34.842 "params": { 
00:22:34.842 "name": "Nvme5", 00:22:34.842 "trtype": "tcp", 00:22:34.842 "traddr": "10.0.0.2", 00:22:34.842 "adrfam": "ipv4", 00:22:34.842 "trsvcid": "4420", 00:22:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:34.842 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:34.842 "hdgst": false, 00:22:34.842 "ddgst": false 00:22:34.842 }, 00:22:34.842 "method": "bdev_nvme_attach_controller" 00:22:34.842 },{ 00:22:34.842 "params": { 00:22:34.842 "name": "Nvme6", 00:22:34.842 "trtype": "tcp", 00:22:34.842 "traddr": "10.0.0.2", 00:22:34.842 "adrfam": "ipv4", 00:22:34.842 "trsvcid": "4420", 00:22:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:34.842 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:34.842 "hdgst": false, 00:22:34.842 "ddgst": false 00:22:34.842 }, 00:22:34.842 "method": "bdev_nvme_attach_controller" 00:22:34.842 },{ 00:22:34.842 "params": { 00:22:34.842 "name": "Nvme7", 00:22:34.842 "trtype": "tcp", 00:22:34.842 "traddr": "10.0.0.2", 00:22:34.842 "adrfam": "ipv4", 00:22:34.842 "trsvcid": "4420", 00:22:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:34.842 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:34.842 "hdgst": false, 00:22:34.842 "ddgst": false 00:22:34.842 }, 00:22:34.842 "method": "bdev_nvme_attach_controller" 00:22:34.842 },{ 00:22:34.842 "params": { 00:22:34.842 "name": "Nvme8", 00:22:34.842 "trtype": "tcp", 00:22:34.842 "traddr": "10.0.0.2", 00:22:34.842 "adrfam": "ipv4", 00:22:34.842 "trsvcid": "4420", 00:22:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:34.842 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:34.842 "hdgst": false, 00:22:34.842 "ddgst": false 00:22:34.842 }, 00:22:34.842 "method": "bdev_nvme_attach_controller" 00:22:34.842 },{ 00:22:34.842 "params": { 00:22:34.842 "name": "Nvme9", 00:22:34.842 "trtype": "tcp", 00:22:34.842 "traddr": "10.0.0.2", 00:22:34.842 "adrfam": "ipv4", 00:22:34.842 "trsvcid": "4420", 00:22:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:34.842 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:34.842 "hdgst": false, 00:22:34.842 "ddgst": false 00:22:34.842 }, 00:22:34.842 "method": "bdev_nvme_attach_controller" 00:22:34.842 },{ 00:22:34.842 "params": { 00:22:34.842 "name": "Nvme10", 00:22:34.842 "trtype": "tcp", 00:22:34.842 "traddr": "10.0.0.2", 00:22:34.842 "adrfam": "ipv4", 00:22:34.842 "trsvcid": "4420", 00:22:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:34.842 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:34.842 "hdgst": false, 00:22:34.842 "ddgst": false 00:22:34.842 }, 00:22:34.842 "method": "bdev_nvme_attach_controller" 00:22:34.842 }' 00:22:34.842 [2024-12-10 05:47:22.520190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.842 [2024-12-10 05:47:22.559698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.217 Running I/O for 1 seconds... 00:22:37.409 2257.00 IOPS, 141.06 MiB/s 00:22:37.409 Latency(us) 00:22:37.409 [2024-12-10T04:47:25.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.409 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.409 Verification LBA range: start 0x0 length 0x400 00:22:37.409 Nvme1n1 : 1.05 242.96 15.18 0.00 0.00 260922.27 19848.05 215707.06 00:22:37.409 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.409 Verification LBA range: start 0x0 length 0x400 00:22:37.409 Nvme2n1 : 1.06 241.99 15.12 0.00 0.00 258122.36 16477.62 218702.99 00:22:37.409 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.409 Verification LBA range: start 0x0 length 0x400 00:22:37.409 Nvme3n1 : 1.11 287.98 18.00 0.00 0.00 214028.97 14792.41 227690.79 00:22:37.409 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.409 Verification LBA range: start 0x0 length 0x400 00:22:37.409 Nvme4n1 : 1.08 305.58 19.10 0.00 0.00 194069.58 3526.46 199728.76 00:22:37.409 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:37.409 Verification LBA range: start 0x0 length 0x400 00:22:37.409 Nvme5n1 : 1.12 294.03 18.38 0.00 0.00 201487.24 8800.55 209715.20 00:22:37.409 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.409 Verification LBA range: start 0x0 length 0x400 00:22:37.409 Nvme6n1 : 1.12 285.10 17.82 0.00 0.00 206146.61 19972.88 201726.05 00:22:37.409 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.409 Verification LBA range: start 0x0 length 0x400 00:22:37.409 Nvme7n1 : 1.12 285.61 17.85 0.00 0.00 203494.25 15853.47 216705.71 00:22:37.409 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.409 Verification LBA range: start 0x0 length 0x400 00:22:37.409 Nvme8n1 : 1.13 284.16 17.76 0.00 0.00 201549.04 13981.01 233682.65 00:22:37.409 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.409 Verification LBA range: start 0x0 length 0x400 00:22:37.409 Nvme9n1 : 1.13 282.44 17.65 0.00 0.00 199904.69 15354.15 222697.57 00:22:37.409 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.409 Verification LBA range: start 0x0 length 0x400 00:22:37.409 Nvme10n1 : 1.16 330.19 20.64 0.00 0.00 168961.85 3885.35 228689.43 00:22:37.409 [2024-12-10T04:47:25.305Z] =================================================================================================================== 00:22:37.409 [2024-12-10T04:47:25.305Z] Total : 2840.04 177.50 0.00 0.00 207962.33 3526.46 233682.65 00:22:37.409 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:37.409 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:37.409 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:22:37.409 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:37.409 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:37.409 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.409 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:37.409 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.409 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:37.409 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.409 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.409 rmmod nvme_tcp 00:22:37.409 rmmod nvme_fabrics 00:22:37.409 rmmod nvme_keyring 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1248456 ']' 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1248456 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1248456 ']' 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1248456 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1248456 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1248456' 00:22:37.667 killing process with pid 1248456 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1248456 00:22:37.667 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1248456 00:22:37.926 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.926 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.926 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.926 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:37.926 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:37.926 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.926 05:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.926 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.926 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.926 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.926 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.926 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.528 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:40.528 00:22:40.528 real 0m14.812s 00:22:40.528 user 0m31.358s 00:22:40.528 sys 0m5.820s 00:22:40.528 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.528 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.528 ************************************ 00:22:40.528 END TEST nvmf_shutdown_tc1 00:22:40.528 ************************************ 00:22:40.528 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:40.529 ************************************ 00:22:40.529 
START TEST nvmf_shutdown_tc2 00:22:40.529 ************************************ 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.529 05:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.529 05:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.529 05:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:40.529 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:40.529 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:40.529 05:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.529 05:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:40.529 Found net devices under 0000:af:00.0: cvl_0_0 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:40.529 Found net devices under 0000:af:00.1: cvl_0_1 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.529 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.530 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.530 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.530 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.530 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.530 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.530 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.530 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.530 05:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.530 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.530 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:40.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:22:40.530 00:22:40.530 --- 10.0.0.2 ping statistics --- 00:22:40.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.530 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:22:40.530 00:22:40.530 --- 10.0.0.1 ping statistics --- 00:22:40.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.530 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.530 05:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1250028 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1250028 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1250028 ']' 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.530 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.530 [2024-12-10 05:47:28.242978] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:22:40.530 [2024-12-10 05:47:28.243028] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.530 [2024-12-10 05:47:28.322343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.530 [2024-12-10 05:47:28.363420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.530 [2024-12-10 05:47:28.363456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.530 [2024-12-10 05:47:28.363463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.530 [2024-12-10 05:47:28.363469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.530 [2024-12-10 05:47:28.363475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:40.530 [2024-12-10 05:47:28.364982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.530 [2024-12-10 05:47:28.365092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.530 [2024-12-10 05:47:28.365209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.530 [2024-12-10 05:47:28.365210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.789 [2024-12-10 05:47:28.502643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.789 05:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.789 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.789 Malloc1 00:22:40.790 [2024-12-10 05:47:28.614249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.790 Malloc2 00:22:40.790 Malloc3 00:22:41.048 Malloc4 00:22:41.048 Malloc5 00:22:41.048 Malloc6 00:22:41.048 Malloc7 00:22:41.048 Malloc8 00:22:41.307 Malloc9 
00:22:41.307 Malloc10 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1250298 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1250298 /var/tmp/bdevperf.sock 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1250298 ']' 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:41.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.307 { 00:22:41.307 "params": { 00:22:41.307 "name": "Nvme$subsystem", 00:22:41.307 "trtype": "$TEST_TRANSPORT", 00:22:41.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.307 "adrfam": "ipv4", 00:22:41.307 "trsvcid": "$NVMF_PORT", 00:22:41.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.307 "hdgst": ${hdgst:-false}, 00:22:41.307 "ddgst": ${ddgst:-false} 00:22:41.307 }, 00:22:41.307 "method": "bdev_nvme_attach_controller" 00:22:41.307 } 00:22:41.307 EOF 00:22:41.307 )") 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.307 { 00:22:41.307 "params": { 00:22:41.307 "name": "Nvme$subsystem", 00:22:41.307 "trtype": "$TEST_TRANSPORT", 00:22:41.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.307 
"adrfam": "ipv4", 00:22:41.307 "trsvcid": "$NVMF_PORT", 00:22:41.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.307 "hdgst": ${hdgst:-false}, 00:22:41.307 "ddgst": ${ddgst:-false} 00:22:41.307 }, 00:22:41.307 "method": "bdev_nvme_attach_controller" 00:22:41.307 } 00:22:41.307 EOF 00:22:41.307 )") 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.307 { 00:22:41.307 "params": { 00:22:41.307 "name": "Nvme$subsystem", 00:22:41.307 "trtype": "$TEST_TRANSPORT", 00:22:41.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.307 "adrfam": "ipv4", 00:22:41.307 "trsvcid": "$NVMF_PORT", 00:22:41.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.307 "hdgst": ${hdgst:-false}, 00:22:41.307 "ddgst": ${ddgst:-false} 00:22:41.307 }, 00:22:41.307 "method": "bdev_nvme_attach_controller" 00:22:41.307 } 00:22:41.307 EOF 00:22:41.307 )") 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.307 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.307 { 00:22:41.307 "params": { 00:22:41.307 "name": "Nvme$subsystem", 00:22:41.307 "trtype": "$TEST_TRANSPORT", 00:22:41.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.307 "adrfam": "ipv4", 00:22:41.307 "trsvcid": "$NVMF_PORT", 00:22:41.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:41.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.307 "hdgst": ${hdgst:-false}, 00:22:41.308 "ddgst": ${ddgst:-false} 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 } 00:22:41.308 EOF 00:22:41.308 )") 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.308 { 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme$subsystem", 00:22:41.308 "trtype": "$TEST_TRANSPORT", 00:22:41.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "$NVMF_PORT", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.308 "hdgst": ${hdgst:-false}, 00:22:41.308 "ddgst": ${ddgst:-false} 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 } 00:22:41.308 EOF 00:22:41.308 )") 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.308 { 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme$subsystem", 00:22:41.308 "trtype": "$TEST_TRANSPORT", 00:22:41.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "$NVMF_PORT", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.308 "hdgst": ${hdgst:-false}, 00:22:41.308 "ddgst": 
${ddgst:-false} 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 } 00:22:41.308 EOF 00:22:41.308 )") 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.308 { 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme$subsystem", 00:22:41.308 "trtype": "$TEST_TRANSPORT", 00:22:41.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "$NVMF_PORT", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.308 "hdgst": ${hdgst:-false}, 00:22:41.308 "ddgst": ${ddgst:-false} 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 } 00:22:41.308 EOF 00:22:41.308 )") 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:41.308 [2024-12-10 05:47:29.091288] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:22:41.308 [2024-12-10 05:47:29.091339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250298 ] 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.308 { 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme$subsystem", 00:22:41.308 "trtype": "$TEST_TRANSPORT", 00:22:41.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "$NVMF_PORT", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.308 "hdgst": ${hdgst:-false}, 00:22:41.308 "ddgst": ${ddgst:-false} 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 } 00:22:41.308 EOF 00:22:41.308 )") 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.308 { 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme$subsystem", 00:22:41.308 "trtype": "$TEST_TRANSPORT", 00:22:41.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "$NVMF_PORT", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.308 "hdgst": ${hdgst:-false}, 00:22:41.308 "ddgst": ${ddgst:-false} 00:22:41.308 }, 00:22:41.308 "method": 
"bdev_nvme_attach_controller" 00:22:41.308 } 00:22:41.308 EOF 00:22:41.308 )") 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.308 { 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme$subsystem", 00:22:41.308 "trtype": "$TEST_TRANSPORT", 00:22:41.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "$NVMF_PORT", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.308 "hdgst": ${hdgst:-false}, 00:22:41.308 "ddgst": ${ddgst:-false} 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 } 00:22:41.308 EOF 00:22:41.308 )") 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:41.308 05:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme1", 00:22:41.308 "trtype": "tcp", 00:22:41.308 "traddr": "10.0.0.2", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "4420", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:41.308 "hdgst": false, 00:22:41.308 "ddgst": false 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 },{ 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme2", 00:22:41.308 "trtype": "tcp", 00:22:41.308 "traddr": "10.0.0.2", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "4420", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:41.308 "hdgst": false, 00:22:41.308 "ddgst": false 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 },{ 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme3", 00:22:41.308 "trtype": "tcp", 00:22:41.308 "traddr": "10.0.0.2", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "4420", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:41.308 "hdgst": false, 00:22:41.308 "ddgst": false 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 },{ 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme4", 00:22:41.308 "trtype": "tcp", 00:22:41.308 "traddr": "10.0.0.2", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "4420", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:41.308 "hdgst": false, 00:22:41.308 "ddgst": false 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 },{ 00:22:41.308 "params": { 
00:22:41.308 "name": "Nvme5", 00:22:41.308 "trtype": "tcp", 00:22:41.308 "traddr": "10.0.0.2", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "4420", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:41.308 "hdgst": false, 00:22:41.308 "ddgst": false 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 },{ 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme6", 00:22:41.308 "trtype": "tcp", 00:22:41.308 "traddr": "10.0.0.2", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "4420", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:41.308 "hdgst": false, 00:22:41.308 "ddgst": false 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 },{ 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme7", 00:22:41.308 "trtype": "tcp", 00:22:41.308 "traddr": "10.0.0.2", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "4420", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:41.308 "hdgst": false, 00:22:41.308 "ddgst": false 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 },{ 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme8", 00:22:41.308 "trtype": "tcp", 00:22:41.308 "traddr": "10.0.0.2", 00:22:41.308 "adrfam": "ipv4", 00:22:41.308 "trsvcid": "4420", 00:22:41.308 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:41.308 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:41.308 "hdgst": false, 00:22:41.308 "ddgst": false 00:22:41.308 }, 00:22:41.308 "method": "bdev_nvme_attach_controller" 00:22:41.308 },{ 00:22:41.308 "params": { 00:22:41.308 "name": "Nvme9", 00:22:41.308 "trtype": "tcp", 00:22:41.309 "traddr": "10.0.0.2", 00:22:41.309 "adrfam": "ipv4", 00:22:41.309 "trsvcid": "4420", 00:22:41.309 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:41.309 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:41.309 "hdgst": false, 00:22:41.309 "ddgst": false 00:22:41.309 }, 00:22:41.309 "method": "bdev_nvme_attach_controller" 00:22:41.309 },{ 00:22:41.309 "params": { 00:22:41.309 "name": "Nvme10", 00:22:41.309 "trtype": "tcp", 00:22:41.309 "traddr": "10.0.0.2", 00:22:41.309 "adrfam": "ipv4", 00:22:41.309 "trsvcid": "4420", 00:22:41.309 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:41.309 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:41.309 "hdgst": false, 00:22:41.309 "ddgst": false 00:22:41.309 }, 00:22:41.309 "method": "bdev_nvme_attach_controller" 00:22:41.309 }' 00:22:41.309 [2024-12-10 05:47:29.169993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.567 [2024-12-10 05:47:29.210357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.942 Running I/O for 10 seconds... 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:43.200 05:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.200 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.200 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.200 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=73 00:22:43.200 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 73 -ge 100 ']' 00:22:43.200 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:43.459 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:43.459 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:43.459 05:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:43.459 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:43.459 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.459 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.459 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.459 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:43.459 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:43.459 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1250298 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1250298 ']' 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1250298 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.721 05:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1250298 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1250298' 00:22:43.721 killing process with pid 1250298 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1250298 00:22:43.721 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1250298 00:22:43.721 Received shutdown signal, test time was about 0.848210 seconds 00:22:43.721 00:22:43.721 Latency(us) 00:22:43.721 [2024-12-10T04:47:31.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.721 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.721 Verification LBA range: start 0x0 length 0x400 00:22:43.721 Nvme1n1 : 0.84 317.05 19.82 0.00 0.00 198241.81 6584.81 175761.31 00:22:43.721 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.721 Verification LBA range: start 0x0 length 0x400 00:22:43.721 Nvme2n1 : 0.85 302.04 18.88 0.00 0.00 205677.47 15104.49 219701.64 00:22:43.721 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.721 Verification LBA range: start 0x0 length 0x400 00:22:43.721 Nvme3n1 : 0.83 312.12 19.51 0.00 0.00 194654.58 1755.43 204721.98 00:22:43.721 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.721 Verification LBA range: start 0x0 length 0x400 00:22:43.721 Nvme4n1 : 0.83 306.64 19.17 0.00 0.00 194694.34 
16477.62 242670.45 00:22:43.721 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.721 Verification LBA range: start 0x0 length 0x400 00:22:43.721 Nvme5n1 : 0.82 234.97 14.69 0.00 0.00 248826.47 28711.01 200727.41 00:22:43.721 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.721 Verification LBA range: start 0x0 length 0x400 00:22:43.721 Nvme6n1 : 0.85 302.88 18.93 0.00 0.00 189668.45 26464.06 222697.57 00:22:43.721 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.721 Verification LBA range: start 0x0 length 0x400 00:22:43.722 Nvme7n1 : 0.81 236.47 14.78 0.00 0.00 236754.65 13044.78 216705.71 00:22:43.722 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.722 Verification LBA range: start 0x0 length 0x400 00:22:43.722 Nvme8n1 : 0.84 304.43 19.03 0.00 0.00 180950.55 16852.11 204721.98 00:22:43.722 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.722 Verification LBA range: start 0x0 length 0x400 00:22:43.722 Nvme9n1 : 0.82 233.61 14.60 0.00 0.00 229985.20 18100.42 223696.21 00:22:43.722 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.722 Verification LBA range: start 0x0 length 0x400 00:22:43.722 Nvme10n1 : 0.83 232.67 14.54 0.00 0.00 226002.98 17850.76 241671.80 00:22:43.722 [2024-12-10T04:47:31.618Z] =================================================================================================================== 00:22:43.722 [2024-12-10T04:47:31.618Z] Total : 2782.88 173.93 0.00 0.00 207725.17 1755.43 242670.45 00:22:43.982 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1250028 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.917 rmmod nvme_tcp 00:22:44.917 rmmod nvme_fabrics 00:22:44.917 rmmod nvme_keyring 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1250028 ']' 
00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1250028 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1250028 ']' 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1250028 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1250028 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1250028' 00:22:44.917 killing process with pid 1250028 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1250028 00:22:44.917 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1250028 00:22:45.485 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:45.485 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:45.485 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:45.485 05:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:45.485 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:45.485 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:45.485 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:45.485 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.485 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:45.485 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.485 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.485 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.394 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:47.394 00:22:47.394 real 0m7.344s 00:22:47.394 user 0m21.542s 00:22:47.394 sys 0m1.394s 00:22:47.394 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.394 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.394 ************************************ 00:22:47.394 END TEST nvmf_shutdown_tc2 00:22:47.394 ************************************ 00:22:47.394 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:47.394 05:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:47.394 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.395 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:47.654 ************************************ 00:22:47.655 START TEST nvmf_shutdown_tc3 00:22:47.655 ************************************ 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.655 05:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:47.655 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.655 05:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:47.655 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:47.655 Found net devices under 0000:af:00.0: cvl_0_0 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:47.655 Found net devices under 0000:af:00.1: cvl_0_1 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.655 
05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:47.655 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.656 05:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.656 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:47.656 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:47.656 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.656 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.656 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.656 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.656 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:47.656 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.914 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.914 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.914 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:47.914 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:47.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:22:47.914 00:22:47.914 --- 10.0.0.2 ping statistics --- 00:22:47.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.914 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:22:47.915 00:22:47.915 --- 10.0.0.1 ping statistics --- 00:22:47.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.915 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1251407 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1251407 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1251407 ']' 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
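`waitforlisten` (common/autotest_common.sh) prints the "Waiting for process..." banner above and then retries up to `max_retries=100` times until the freshly launched `nvmf_tgt` is reachable on `/var/tmp/spdk.sock`. A hedged sketch of that retry shape; the real helper probes the UNIX domain socket with an RPC, so the plain `-e` path check and the `wait_for_path` name here are stand-ins:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten retry pattern: poll until a path appears
# or the retry budget is exhausted. The real helper probes the RPC
# socket; a file-existence check stands in for that here.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

tmp=$(mktemp)              # already exists, so the wait succeeds at once
wait_for_path "$tmp" 5 && echo "listening"
```

On timeout the caller gets a nonzero status, which is what turns into a test failure rather than an indefinite hang.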
00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.915 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.915 [2024-12-10 05:47:35.675746] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:22:47.915 [2024-12-10 05:47:35.675797] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.915 [2024-12-10 05:47:35.756206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.915 [2024-12-10 05:47:35.796962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.915 [2024-12-10 05:47:35.797000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.915 [2024-12-10 05:47:35.797007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.915 [2024-12-10 05:47:35.797014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.915 [2024-12-10 05:47:35.797019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:47.915 [2024-12-10 05:47:35.798549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.915 [2024-12-10 05:47:35.798662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.915 [2024-12-10 05:47:35.798678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:47.915 [2024-12-10 05:47:35.798683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 [2024-12-10 05:47:35.936235] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.174 05:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.174 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 Malloc1 00:22:48.174 [2024-12-10 05:47:36.044759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.432 Malloc2 00:22:48.432 Malloc3 00:22:48.432 Malloc4 00:22:48.432 Malloc5 00:22:48.432 Malloc6 00:22:48.432 Malloc7 00:22:48.432 Malloc8 00:22:48.691 Malloc9 
00:22:48.691 Malloc10 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1251590 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1251590 /var/tmp/bdevperf.sock 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1251590 ']' 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
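The ten `for`/`cat` pairs traced at shutdown.sh@28-29, followed by the `Malloc1`..`Malloc10` bdev creations, are one pass per entry of `num_subsystems=({1..10})`: the script wipes `rpcs.txt` (shutdown.sh@27) and then appends one subsystem's RPC stanza per iteration. A sketch of that loop shape; the stanza body below is illustrative, not the script's exact RPC text:

```shell
#!/usr/bin/env bash
# Sketch of shutdown.sh@27-29: wipe the rpcs file, then append one
# stanza per entry of num_subsystems=({1..10}) via a for/cat pair.
num_subsystems=({1..10})
rpcs=$(mktemp)   # stands in for target/rpcs.txt

for i in "${num_subsystems[@]}"; do
    # Stanza contents are illustrative; the real script cats RPC commands.
    cat >>"$rpcs" <<EOF
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
EOF
done

grep -c cnode "$rpcs"   # prints: 10
```

The accumulated file is later fed to the target in one batch, which is why the trace shows only `cat` calls here and a single `rpc_cmd` at shutdown.sh@36.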
00:22:48.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.691 { 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme$subsystem", 00:22:48.691 "trtype": "$TEST_TRANSPORT", 00:22:48.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.691 "adrfam": "ipv4", 00:22:48.691 "trsvcid": "$NVMF_PORT", 00:22:48.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.691 "hdgst": ${hdgst:-false}, 00:22:48.691 "ddgst": ${ddgst:-false} 00:22:48.691 }, 00:22:48.691 "method": "bdev_nvme_attach_controller" 00:22:48.691 } 00:22:48.691 EOF 00:22:48.691 )") 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.691 { 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme$subsystem", 00:22:48.691 "trtype": "$TEST_TRANSPORT", 00:22:48.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.691 
"adrfam": "ipv4", 00:22:48.691 "trsvcid": "$NVMF_PORT", 00:22:48.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.691 "hdgst": ${hdgst:-false}, 00:22:48.691 "ddgst": ${ddgst:-false} 00:22:48.691 }, 00:22:48.691 "method": "bdev_nvme_attach_controller" 00:22:48.691 } 00:22:48.691 EOF 00:22:48.691 )") 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.691 { 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme$subsystem", 00:22:48.691 "trtype": "$TEST_TRANSPORT", 00:22:48.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.691 "adrfam": "ipv4", 00:22:48.691 "trsvcid": "$NVMF_PORT", 00:22:48.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.691 "hdgst": ${hdgst:-false}, 00:22:48.691 "ddgst": ${ddgst:-false} 00:22:48.691 }, 00:22:48.691 "method": "bdev_nvme_attach_controller" 00:22:48.691 } 00:22:48.691 EOF 00:22:48.691 )") 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.691 { 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme$subsystem", 00:22:48.691 "trtype": "$TEST_TRANSPORT", 00:22:48.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.691 "adrfam": "ipv4", 00:22:48.691 "trsvcid": "$NVMF_PORT", 00:22:48.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:48.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.691 "hdgst": ${hdgst:-false}, 00:22:48.691 "ddgst": ${ddgst:-false} 00:22:48.691 }, 00:22:48.691 "method": "bdev_nvme_attach_controller" 00:22:48.691 } 00:22:48.691 EOF 00:22:48.691 )") 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.691 { 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme$subsystem", 00:22:48.691 "trtype": "$TEST_TRANSPORT", 00:22:48.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.691 "adrfam": "ipv4", 00:22:48.691 "trsvcid": "$NVMF_PORT", 00:22:48.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.691 "hdgst": ${hdgst:-false}, 00:22:48.691 "ddgst": ${ddgst:-false} 00:22:48.691 }, 00:22:48.691 "method": "bdev_nvme_attach_controller" 00:22:48.691 } 00:22:48.691 EOF 00:22:48.691 )") 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.691 { 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme$subsystem", 00:22:48.691 "trtype": "$TEST_TRANSPORT", 00:22:48.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.691 "adrfam": "ipv4", 00:22:48.691 "trsvcid": "$NVMF_PORT", 00:22:48.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.691 "hdgst": ${hdgst:-false}, 00:22:48.691 "ddgst": 
${ddgst:-false} 00:22:48.691 }, 00:22:48.691 "method": "bdev_nvme_attach_controller" 00:22:48.691 } 00:22:48.691 EOF 00:22:48.691 )") 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.691 { 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme$subsystem", 00:22:48.691 "trtype": "$TEST_TRANSPORT", 00:22:48.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.691 "adrfam": "ipv4", 00:22:48.691 "trsvcid": "$NVMF_PORT", 00:22:48.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.691 "hdgst": ${hdgst:-false}, 00:22:48.691 "ddgst": ${ddgst:-false} 00:22:48.691 }, 00:22:48.691 "method": "bdev_nvme_attach_controller" 00:22:48.691 } 00:22:48.691 EOF 00:22:48.691 )") 00:22:48.691 [2024-12-10 05:47:36.511726] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:22:48.691 [2024-12-10 05:47:36.511777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251590 ] 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.691 { 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme$subsystem", 00:22:48.691 "trtype": "$TEST_TRANSPORT", 00:22:48.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.691 "adrfam": "ipv4", 00:22:48.691 "trsvcid": "$NVMF_PORT", 00:22:48.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.691 "hdgst": ${hdgst:-false}, 00:22:48.691 "ddgst": ${ddgst:-false} 00:22:48.691 }, 00:22:48.691 "method": "bdev_nvme_attach_controller" 00:22:48.691 } 00:22:48.691 EOF 00:22:48.691 )") 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.691 { 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme$subsystem", 00:22:48.691 "trtype": "$TEST_TRANSPORT", 00:22:48.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.691 "adrfam": "ipv4", 00:22:48.691 "trsvcid": "$NVMF_PORT", 00:22:48.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.691 "hdgst": 
${hdgst:-false}, 00:22:48.691 "ddgst": ${ddgst:-false} 00:22:48.691 }, 00:22:48.691 "method": "bdev_nvme_attach_controller" 00:22:48.691 } 00:22:48.691 EOF 00:22:48.691 )") 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.691 { 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme$subsystem", 00:22:48.691 "trtype": "$TEST_TRANSPORT", 00:22:48.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.691 "adrfam": "ipv4", 00:22:48.691 "trsvcid": "$NVMF_PORT", 00:22:48.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.691 "hdgst": ${hdgst:-false}, 00:22:48.691 "ddgst": ${ddgst:-false} 00:22:48.691 }, 00:22:48.691 "method": "bdev_nvme_attach_controller" 00:22:48.691 } 00:22:48.691 EOF 00:22:48.691 )") 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
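`gen_nvmf_target_json` (nvmf/common.sh@560-586), whose expansion fills the trace above, builds one heredoc JSON fragment per requested subsystem id into the `config` array, then joins the fragments with `IFS=,` and pretty-prints the result through `jq`, yielding the fully substituted per-controller document printed below. A condensed sketch of that assemble-and-join step; `jq` is left out so the sketch has no external dependency, and the fragment is trimmed to a few of the log's fields:

```shell
#!/usr/bin/env bash
# Condensed sketch of gen_nvmf_target_json: one heredoc JSON fragment
# per subsystem id, joined into a single document with IFS=, .
gen_target_json() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,
    # The real helper pipes this through jq; printf keeps it dependency-free.
    printf '%s\n' "${config[*]}"
}

gen_target_json 1 2 3
```

The `--json /dev/fd/63` argument in the bdevperf command line above is bash process substitution consuming exactly this output, i.e. roughly `bdevperf --json <(gen_target_json 1 2 ...)`, so no temp file is needed.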
00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:48.691 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme1", 00:22:48.691 "trtype": "tcp", 00:22:48.691 "traddr": "10.0.0.2", 00:22:48.691 "adrfam": "ipv4", 00:22:48.691 "trsvcid": "4420", 00:22:48.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.691 "hdgst": false, 00:22:48.691 "ddgst": false 00:22:48.691 }, 00:22:48.691 "method": "bdev_nvme_attach_controller" 00:22:48.691 },{ 00:22:48.691 "params": { 00:22:48.691 "name": "Nvme2", 00:22:48.692 "trtype": "tcp", 00:22:48.692 "traddr": "10.0.0.2", 00:22:48.692 "adrfam": "ipv4", 00:22:48.692 "trsvcid": "4420", 00:22:48.692 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:48.692 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:48.692 "hdgst": false, 00:22:48.692 "ddgst": false 00:22:48.692 }, 00:22:48.692 "method": "bdev_nvme_attach_controller" 00:22:48.692 },{ 00:22:48.692 "params": { 00:22:48.692 "name": "Nvme3", 00:22:48.692 "trtype": "tcp", 00:22:48.692 "traddr": "10.0.0.2", 00:22:48.692 "adrfam": "ipv4", 00:22:48.692 "trsvcid": "4420", 00:22:48.692 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:48.692 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:48.692 "hdgst": false, 00:22:48.692 "ddgst": false 00:22:48.692 }, 00:22:48.692 "method": "bdev_nvme_attach_controller" 00:22:48.692 },{ 00:22:48.692 "params": { 00:22:48.692 "name": "Nvme4", 00:22:48.692 "trtype": "tcp", 00:22:48.692 "traddr": "10.0.0.2", 00:22:48.692 "adrfam": "ipv4", 00:22:48.692 "trsvcid": "4420", 00:22:48.692 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:48.692 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:48.692 "hdgst": false, 00:22:48.692 "ddgst": false 00:22:48.692 }, 00:22:48.692 "method": "bdev_nvme_attach_controller" 00:22:48.692 },{ 00:22:48.692 "params": { 
00:22:48.692 "name": "Nvme5", 00:22:48.692 "trtype": "tcp", 00:22:48.692 "traddr": "10.0.0.2", 00:22:48.692 "adrfam": "ipv4", 00:22:48.692 "trsvcid": "4420", 00:22:48.692 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:48.692 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:48.692 "hdgst": false, 00:22:48.692 "ddgst": false 00:22:48.692 }, 00:22:48.692 "method": "bdev_nvme_attach_controller" 00:22:48.692 },{ 00:22:48.692 "params": { 00:22:48.692 "name": "Nvme6", 00:22:48.692 "trtype": "tcp", 00:22:48.692 "traddr": "10.0.0.2", 00:22:48.692 "adrfam": "ipv4", 00:22:48.692 "trsvcid": "4420", 00:22:48.692 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:48.692 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:48.692 "hdgst": false, 00:22:48.692 "ddgst": false 00:22:48.692 }, 00:22:48.692 "method": "bdev_nvme_attach_controller" 00:22:48.692 },{ 00:22:48.692 "params": { 00:22:48.692 "name": "Nvme7", 00:22:48.692 "trtype": "tcp", 00:22:48.692 "traddr": "10.0.0.2", 00:22:48.692 "adrfam": "ipv4", 00:22:48.692 "trsvcid": "4420", 00:22:48.692 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:48.692 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:48.692 "hdgst": false, 00:22:48.692 "ddgst": false 00:22:48.692 }, 00:22:48.692 "method": "bdev_nvme_attach_controller" 00:22:48.692 },{ 00:22:48.692 "params": { 00:22:48.692 "name": "Nvme8", 00:22:48.692 "trtype": "tcp", 00:22:48.692 "traddr": "10.0.0.2", 00:22:48.692 "adrfam": "ipv4", 00:22:48.692 "trsvcid": "4420", 00:22:48.692 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:48.692 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:48.692 "hdgst": false, 00:22:48.692 "ddgst": false 00:22:48.692 }, 00:22:48.692 "method": "bdev_nvme_attach_controller" 00:22:48.692 },{ 00:22:48.692 "params": { 00:22:48.692 "name": "Nvme9", 00:22:48.692 "trtype": "tcp", 00:22:48.692 "traddr": "10.0.0.2", 00:22:48.692 "adrfam": "ipv4", 00:22:48.692 "trsvcid": "4420", 00:22:48.692 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:48.692 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:48.692 "hdgst": false, 00:22:48.692 "ddgst": false 00:22:48.692 }, 00:22:48.692 "method": "bdev_nvme_attach_controller" 00:22:48.692 },{ 00:22:48.692 "params": { 00:22:48.692 "name": "Nvme10", 00:22:48.692 "trtype": "tcp", 00:22:48.692 "traddr": "10.0.0.2", 00:22:48.692 "adrfam": "ipv4", 00:22:48.692 "trsvcid": "4420", 00:22:48.692 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:48.692 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:48.692 "hdgst": false, 00:22:48.692 "ddgst": false 00:22:48.692 }, 00:22:48.692 "method": "bdev_nvme_attach_controller" 00:22:48.692 }' 00:22:48.950 [2024-12-10 05:47:36.588776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.950 [2024-12-10 05:47:36.628507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.322 Running I/O for 10 seconds... 00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1
00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:22:50.580 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:22:50.581 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:22:50.581 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.581 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:50.581 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.839 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:22:50.839 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:22:50.839 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:22:50.839 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:22:50.839 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1251407
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1251407 ']'
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1251407
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:22:51.113 05:47:38
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1251407
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1251407'
killing process with pid 1251407
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1251407
00:22:51.113 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1251407
00:22:51.113 [2024-12-10 05:47:38.834733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a3840 is same with the state(6) to be set
00:22:51.113 [2024-12-10 05:47:38.836232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a63f0 is same with the state(6) to be set
00:22:51.114 [2024-12-10 05:47:38.837751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a3d10 is same with the state(6) to be set
00:22:51.115 [2024-12-10 05:47:38.839422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 
is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 
00:22:51.115 [2024-12-10 05:47:38.839618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.115 [2024-12-10 05:47:38.839663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839697] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 
is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.839862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a41e0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 
00:22:51.116 [2024-12-10 05:47:38.841571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841654] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 
is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.116 [2024-12-10 05:47:38.841884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.841891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.841899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.841906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.841913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 
00:22:51.117 [2024-12-10 05:47:38.841921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.841928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.841934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4ba0 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843746] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 
is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.843995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 
00:22:51.117 [2024-12-10 05:47:38.844009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844095] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.117 [2024-12-10 05:47:38.844119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5560 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.844896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.844928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.844940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.844949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.844957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.844955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with t[2024-12-10 05:47:38.844966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:22:51.118 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 
[2024-12-10 05:47:38.844977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.844980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.844986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.844988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.844996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with t[2024-12-10 05:47:38.844996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2462350 is same he state(6) to be set 00:22:51.118 with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 
[2024-12-10 05:47:38.845051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248ded0 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to 
be set 00:22:51.118 [2024-12-10 05:47:38.845162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c1a0 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same 
with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.118 [2024-12-10 05:47:38.845306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202b4d0 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.118 [2024-12-10 05:47:38.845339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.118 [2024-12-10 05:47:38.845346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202b2d0 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5a30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aeb30 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2037490 is same with the state(6) to be set 00:22:51.119 
[2024-12-10 05:47:38.845633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c610 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.845718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.119 [2024-12-10 05:47:38.845774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.119 [2024-12-10 05:47:38.845783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d6c0 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846081] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.119 [2024-12-10 05:47:38.846186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 
is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 
00:22:51.120 [2024-12-10 05:47:38.846343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5f00 is same with the state(6) to be set 00:22:51.120 [2024-12-10 05:47:38.846491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846503] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.120 [2024-12-10 05:47:38.846686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.120 [2024-12-10 05:47:38.846709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.120 [2024-12-10 05:47:38.846715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.846990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.846997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 
[2024-12-10 05:47:38.847038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.121 [2024-12-10 05:47:38.847328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.121 [2024-12-10 05:47:38.847335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 
05:47:38.847399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:51.122 [2024-12-10 05:47:38.847565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:51.122 [2024-12-10 05:47:38.847831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 
05:47:38.847918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.847992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.847999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.848008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.848015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.848023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.848032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.848040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.848047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.122 [2024-12-10 05:47:38.848056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.122 [2024-12-10 05:47:38.848063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 
[2024-12-10 05:47:38.848284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.848588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.123 [2024-12-10 05:47:38.848594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.123 [2024-12-10 05:47:38.850939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:51.123 [2024-12-10 05:47:38.850970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:51.123 [2024-12-10 05:47:38.850995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2462350 (9): Bad file descriptor 00:22:51.123 [2024-12-10 05:47:38.851007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202b2d0 (9): Bad file descriptor 00:22:51.123 [2024-12-10 05:47:38.852085] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:51.123 [2024-12-10 05:47:38.852235] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:51.123 [2024-12-10 05:47:38.852432] posix.c:1054:posix_sock_create: 
*ERROR*: connect() failed, errno = 111 00:22:51.123 [2024-12-10 05:47:38.852449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202b2d0 with addr=10.0.0.2, port=4420 00:22:51.123 [2024-12-10 05:47:38.852458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202b2d0 is same with the state(6) to be set 00:22:51.123 [2024-12-10 05:47:38.852540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.123 [2024-12-10 05:47:38.852551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2462350 with addr=10.0.0.2, port=4420 00:22:51.123 [2024-12-10 05:47:38.852558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2462350 is same with the state(6) to be set 00:22:51.123 [2024-12-10 05:47:38.852609] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:51.123 [2024-12-10 05:47:38.852658] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:51.123 [2024-12-10 05:47:38.852702] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:51.123 [2024-12-10 05:47:38.852758] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:51.123 [2024-12-10 05:47:38.852824] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:51.123 [2024-12-10 05:47:38.852876] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:51.124 [2024-12-10 05:47:38.852900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202b2d0 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.852912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2462350 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.852998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 
00:22:51.124 [2024-12-10 05:47:38.853008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:51.124 [2024-12-10 05:47:38.853018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:51.124 [2024-12-10 05:47:38.853027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:51.124 [2024-12-10 05:47:38.853036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:51.124 [2024-12-10 05:47:38.853046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:51.124 [2024-12-10 05:47:38.853054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:51.124 [2024-12-10 05:47:38.853061] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:22:51.124 [2024-12-10 05:47:38.854901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248ded0 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.854923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c1a0 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.854942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202b4d0 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.854957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24aeb30 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.854989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.124 [2024-12-10 05:47:38.855001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.855010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.124 [2024-12-10 05:47:38.855018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.855026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.124 [2024-12-10 05:47:38.855034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.855042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.124 [2024-12-10 05:47:38.855050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.855057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aed10 is same with the state(6) to be set 00:22:51.124 [2024-12-10 05:47:38.855077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2037490 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.855091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4c610 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.855107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202d6c0 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.861535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:51.124 [2024-12-10 05:47:38.861555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:51.124 [2024-12-10 05:47:38.861854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.124 [2024-12-10 05:47:38.861870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2462350 with addr=10.0.0.2, port=4420 00:22:51.124 [2024-12-10 05:47:38.861879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2462350 is same with the state(6) to be set 00:22:51.124 [2024-12-10 05:47:38.862020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.124 [2024-12-10 05:47:38.862034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202b2d0 with addr=10.0.0.2, port=4420 00:22:51.124 [2024-12-10 05:47:38.862042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202b2d0 is same with the state(6) to be set 00:22:51.124 [2024-12-10 05:47:38.862081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x2462350 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.862093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202b2d0 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.862128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:51.124 [2024-12-10 05:47:38.862137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:51.124 [2024-12-10 05:47:38.862146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:51.124 [2024-12-10 05:47:38.862155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:51.124 [2024-12-10 05:47:38.862163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:51.124 [2024-12-10 05:47:38.862176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:51.124 [2024-12-10 05:47:38.862183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:51.124 [2024-12-10 05:47:38.862189] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:51.124 [2024-12-10 05:47:38.864950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24aed10 (9): Bad file descriptor 00:22:51.124 [2024-12-10 05:47:38.865092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.124 [2024-12-10 05:47:38.865441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.124 [2024-12-10 05:47:38.865449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.125 [2024-12-10 05:47:38.865458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.125 [2024-12-10 05:47:38.865466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.125 [2024-12-10 05:47:38.865476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.125 [2024-12-10 05:47:38.865483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.125 [2024-12-10 05:47:38.865492] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.125 [2024-12-10 05:47:38.865500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.125 [2024-12-10 05:47:38.865510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.125 [2024-12-10 05:47:38.865518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.125 [2024-12-10 05:47:38.865527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.125 [2024-12-10 05:47:38.865535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.125 [2024-12-10 05:47:38.865544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.125 [2024-12-10 05:47:38.865552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.125 [2024-12-10 05:47:38.865560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.125 [2024-12-10 05:47:38.865567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.125 [2024-12-10 05:47:38.865577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.125 [2024-12-10 05:47:38.865585] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:51.125 [2024-12-10 05:47:38.865595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.125 [2024-12-10 05:47:38.865603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeated for cid:29-63 (lba:20096-24448) ...]
00:22:51.126 [2024-12-10 05:47:38.866192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b3b0 is same with the state(6) to be set
00:22:51.126 [2024-12-10 05:47:38.867202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.126 [2024-12-10 05:47:38.867222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeated for cid:1-63 (lba:16512-24448) ...]
00:22:51.127 [2024-12-10 05:47:38.868369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c580 is same with the state(6) to be set
00:22:51.127 [2024-12-10 05:47:38.869353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.127 [2024-12-10 05:47:38.869369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION completion pairs repeated for cid:61-63 (lba:32384-32640) ...]
00:22:51.127 [2024-12-10 05:47:38.869432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.127 [2024-12-10 05:47:38.869440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeated for cid:5-16 (lba:25216-26624) ...]
00:22:51.128 [2024-12-10 05:47:38.869649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869745] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869834] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.869988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.869995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.870004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.870012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 
05:47:38.870020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.870027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.870036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.870043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.870051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.870059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.870067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.870074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.870082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.870089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.870098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.870105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.870113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.870120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.870128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.128 [2024-12-10 05:47:38.870135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.128 [2024-12-10 05:47:38.870144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:51.129 [2024-12-10 05:47:38.870314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870404] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.870420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.870427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240eb80 is same with the state(6) to be set 00:22:51.129 [2024-12-10 05:47:38.871425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:51.129 [2024-12-10 05:47:38.871591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.129 [2024-12-10 05:47:38.871804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.129 [2024-12-10 05:47:38.871814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.871830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.871846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.871862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.871879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.871894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.871910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.871926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.871941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.871956] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.871972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.871989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.871996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872048] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 
05:47:38.872239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.130 [2024-12-10 05:47:38.872408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.130 [2024-12-10 05:47:38.872418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.872426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.872434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.872441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.872450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.872457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.872466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.872474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.872482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243cf50 is same with the state(6) to be set 00:22:51.131 [2024-12-10 05:47:38.873462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.131 [2024-12-10 05:47:38.873496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873867] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873958] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.873988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.873996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.874003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.874011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.874021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.874029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.874038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.874047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.131 [2024-12-10 05:47:38.874055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.131 [2024-12-10 05:47:38.874065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 
05:47:38.874146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:51.132 [2024-12-10 05:47:38.874432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.874522] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.874530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3138bf0 is same with the state(6) to be set 00:22:51.132 [2024-12-10 05:47:38.875518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.875531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.875549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.875566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.875583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.875599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.875619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.875639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.875655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.875672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.875688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:51.132 [2024-12-10 05:47:38.875704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.132 [2024-12-10 05:47:38.875721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.132 [2024-12-10 05:47:38.875730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.875989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.875996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876074] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876164] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 
05:47:38.876353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.133 [2024-12-10 05:47:38.876386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.133 [2024-12-10 05:47:38.876393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.876410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.876426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.876441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.876459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.876475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.876491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.876508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.876523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.876539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.876554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.876570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.876577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3386530 is same with the state(6) to be set 00:22:51.134 [2024-12-10 05:47:38.877570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.134 [2024-12-10 05:47:38.877621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.134 [2024-12-10 05:47:38.877984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.134 [2024-12-10 05:47:38.877993] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.134 [2024-12-10 05:47:38.878000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:26-63 (lba:19712-24448, len:128 each) ...]
00:22:51.135 [2024-12-10 05:47:38.878645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227c3e0 is same with the state(6) to be set
00:22:51.135 [2024-12-10 05:47:38.879600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:51.135 [2024-12-10 05:47:38.879618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:51.135 [2024-12-10 05:47:38.879629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:51.136 [2024-12-10 05:47:38.879642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:51.136 [2024-12-10 05:47:38.879711] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:51.136 [2024-12-10 05:47:38.879728] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:22:51.136 [2024-12-10 05:47:38.879747] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:22:51.136 [2024-12-10 05:47:38.879827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:51.136 [2024-12-10 05:47:38.879840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:51.136 [2024-12-10 05:47:38.879851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:51.136 [2024-12-10 05:47:38.880137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:51.136 [2024-12-10 05:47:38.880152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2037490 with addr=10.0.0.2, port=4420
00:22:51.136 [2024-12-10 05:47:38.880162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2037490 is same with the state(6) to be set
00:22:51.136 [2024-12-10 05:47:38.880394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:51.136 [2024-12-10 05:47:38.880405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202b4d0 with addr=10.0.0.2, port=4420
00:22:51.136 [2024-12-10 05:47:38.880413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202b4d0 is same with the state(6) to be set
00:22:51.136 [2024-12-10 05:47:38.880609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:51.136 [2024-12-10 05:47:38.880620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202d6c0 with addr=10.0.0.2, port=4420
00:22:51.136 [2024-12-10 05:47:38.880628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d6c0 is same with the state(6) to be set
00:22:51.136 [2024-12-10 05:47:38.880762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:51.136 [2024-12-10 05:47:38.880773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c1a0 with addr=10.0.0.2, port=4420
00:22:51.136 [2024-12-10 05:47:38.880781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c1a0 is same with the state(6) to be set
00:22:51.136 [2024-12-10 05:47:38.882322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:51.136 [2024-12-10 05:47:38.882341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:51.136 [2024-12-10 05:47:38.882599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:51.136 [2024-12-10 05:47:38.882614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f4c610 with addr=10.0.0.2, port=4420
00:22:51.136 [2024-12-10 05:47:38.882622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c610 is same with the state(6) to be set
00:22:51.136 [2024-12-10 05:47:38.882842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:51.136 [2024-12-10 05:47:38.882855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24aeb30 with addr=10.0.0.2, port=4420
00:22:51.136 [2024-12-10 05:47:38.882863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aeb30 is same with the state(6) to be set
00:22:51.136 [2024-12-10 05:47:38.883079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:51.136 [2024-12-10 05:47:38.883093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248ded0 with addr=10.0.0.2, port=4420
00:22:51.136 [2024-12-10 05:47:38.883101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248ded0 is same with the state(6) to be set
00:22:51.136 [2024-12-10 05:47:38.883114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2037490 (9): Bad file descriptor
00:22:51.136 [2024-12-10 05:47:38.883124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202b4d0 (9): Bad file descriptor
00:22:51.136 [2024-12-10 05:47:38.883132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202d6c0 (9): Bad file descriptor
00:22:51.136 [2024-12-10 05:47:38.883146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c1a0 (9): Bad file descriptor
00:22:51.136 [2024-12-10 05:47:38.883235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.136 [2024-12-10 05:47:38.883247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-54 (lba:16512-23296, len:128 each) ...]
00:22:51.137 [2024-12-10 05:47:38.884161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.137 [2024-12-10 05:47:38.884173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.137 [2024-12-10 05:47:38.884182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.137 [2024-12-10 05:47:38.884197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.137 [2024-12-10 05:47:38.884206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.137 [2024-12-10 05:47:38.884214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.137 [2024-12-10 05:47:38.884223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.137 [2024-12-10 05:47:38.884230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.137 [2024-12-10 05:47:38.884240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.137 [2024-12-10 05:47:38.884247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.137 [2024-12-10 05:47:38.884256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.137 [2024-12-10 05:47:38.884263] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.137 [2024-12-10 05:47:38.884273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.137 [2024-12-10 05:47:38.884280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.137 [2024-12-10 05:47:38.884291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.137 [2024-12-10 05:47:38.884299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.137 [2024-12-10 05:47:38.884308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.137 [2024-12-10 05:47:38.884317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.137 [2024-12-10 05:47:38.884325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227b0b0 is same with the state(6) to be set 00:22:51.137 task offset: 24576 on job bdev=Nvme4n1 fails 00:22:51.137 00:22:51.137 Latency(us) 00:22:51.137 [2024-12-10T04:47:39.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.137 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.137 Job: Nvme1n1 ended in about 0.75 seconds with error 00:22:51.138 Verification LBA range: start 0x0 length 0x400 00:22:51.138 Nvme1n1 : 0.75 171.20 10.70 85.60 0.00 246377.33 16727.28 220700.28 00:22:51.138 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.138 Job: Nvme2n1 ended in about 0.75 seconds 
with error 00:22:51.138 Verification LBA range: start 0x0 length 0x400 00:22:51.138 Nvme2n1 : 0.75 170.71 10.67 85.36 0.00 241922.76 31956.60 198730.12 00:22:51.138 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.138 Job: Nvme3n1 ended in about 0.75 seconds with error 00:22:51.138 Verification LBA range: start 0x0 length 0x400 00:22:51.138 Nvme3n1 : 0.75 260.69 16.29 85.12 0.00 175269.07 10173.68 214708.42 00:22:51.138 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.138 Job: Nvme4n1 ended in about 0.73 seconds with error 00:22:51.138 Verification LBA range: start 0x0 length 0x400 00:22:51.138 Nvme4n1 : 0.73 262.86 16.43 87.62 0.00 168775.92 16227.96 203723.34 00:22:51.138 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.138 Job: Nvme5n1 ended in about 0.75 seconds with error 00:22:51.138 Verification LBA range: start 0x0 length 0x400 00:22:51.138 Nvme5n1 : 0.75 169.78 10.61 84.89 0.00 227735.65 17850.76 214708.42 00:22:51.138 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.138 Job: Nvme6n1 ended in about 0.73 seconds with error 00:22:51.138 Verification LBA range: start 0x0 length 0x400 00:22:51.138 Nvme6n1 : 0.73 262.54 16.41 87.51 0.00 161272.20 5398.92 213709.78 00:22:51.138 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.138 Job: Nvme7n1 ended in about 0.76 seconds with error 00:22:51.138 Verification LBA range: start 0x0 length 0x400 00:22:51.138 Nvme7n1 : 0.76 169.32 10.58 84.66 0.00 217980.83 13856.18 218702.99 00:22:51.138 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.138 Job: Nvme8n1 ended in about 0.76 seconds with error 00:22:51.138 Verification LBA range: start 0x0 length 0x400 00:22:51.138 Nvme8n1 : 0.76 168.87 10.55 84.43 0.00 213488.56 14542.75 214708.42 00:22:51.138 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:51.138 Job: Nvme9n1 ended in about 0.77 seconds with error 00:22:51.138 Verification LBA range: start 0x0 length 0x400 00:22:51.138 Nvme9n1 : 0.77 167.16 10.45 83.58 0.00 211021.37 18225.25 217704.35 00:22:51.138 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.138 Job: Nvme10n1 ended in about 0.76 seconds with error 00:22:51.138 Verification LBA range: start 0x0 length 0x400 00:22:51.138 Nvme10n1 : 0.76 168.41 10.53 84.20 0.00 204079.95 19348.72 233682.65 00:22:51.138 [2024-12-10T04:47:39.034Z] =================================================================================================================== 00:22:51.138 [2024-12-10T04:47:39.034Z] Total : 1971.55 123.22 852.99 0.00 203252.70 5398.92 233682.65 00:22:51.138 [2024-12-10 05:47:38.916424] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:51.138 [2024-12-10 05:47:38.916474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:51.138 [2024-12-10 05:47:38.916779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.138 [2024-12-10 05:47:38.916799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202b2d0 with addr=10.0.0.2, port=4420 00:22:51.138 [2024-12-10 05:47:38.916810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202b2d0 is same with the state(6) to be set 00:22:51.138 [2024-12-10 05:47:38.917036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.138 [2024-12-10 05:47:38.917049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2462350 with addr=10.0.0.2, port=4420 00:22:51.138 [2024-12-10 05:47:38.917056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2462350 is same with the state(6) to be set 00:22:51.138 [2024-12-10 05:47:38.917071] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4c610 (9): Bad file descriptor 00:22:51.138 [2024-12-10 05:47:38.917083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24aeb30 (9): Bad file descriptor 00:22:51.138 [2024-12-10 05:47:38.917092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248ded0 (9): Bad file descriptor 00:22:51.138 [2024-12-10 05:47:38.917100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:51.138 [2024-12-10 05:47:38.917107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:51.138 [2024-12-10 05:47:38.917115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:51.138 [2024-12-10 05:47:38.917126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:51.138 [2024-12-10 05:47:38.917135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:51.138 [2024-12-10 05:47:38.917141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:51.138 [2024-12-10 05:47:38.917148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:51.138 [2024-12-10 05:47:38.917154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:51.138 [2024-12-10 05:47:38.917162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:51.138 [2024-12-10 05:47:38.917173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:51.138 [2024-12-10 05:47:38.917181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:51.138 [2024-12-10 05:47:38.917187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:51.138 [2024-12-10 05:47:38.917195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:51.138 [2024-12-10 05:47:38.917201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:51.138 [2024-12-10 05:47:38.917208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:51.138 [2024-12-10 05:47:38.917214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:22:51.138 [2024-12-10 05:47:38.917608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.138 [2024-12-10 05:47:38.917626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24aed10 with addr=10.0.0.2, port=4420 00:22:51.138 [2024-12-10 05:47:38.917639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aed10 is same with the state(6) to be set 00:22:51.138 [2024-12-10 05:47:38.917649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202b2d0 (9): Bad file descriptor 00:22:51.138 [2024-12-10 05:47:38.917658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2462350 (9): Bad file descriptor 00:22:51.138 [2024-12-10 05:47:38.917666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:51.138 [2024-12-10 05:47:38.917672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:51.138 [2024-12-10 05:47:38.917680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:51.138 [2024-12-10 05:47:38.917688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:51.138 [2024-12-10 05:47:38.917697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:51.138 [2024-12-10 05:47:38.917703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:51.138 [2024-12-10 05:47:38.917709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:22:51.138 [2024-12-10 05:47:38.917715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:51.138 [2024-12-10 05:47:38.917723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:51.138 [2024-12-10 05:47:38.917729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:51.138 [2024-12-10 05:47:38.917737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:51.138 [2024-12-10 05:47:38.917744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:51.138 [2024-12-10 05:47:38.917806] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:22:51.138 [2024-12-10 05:47:38.917819] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:22:51.138 [2024-12-10 05:47:38.918103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24aed10 (9): Bad file descriptor 00:22:51.138 [2024-12-10 05:47:38.918115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:51.138 [2024-12-10 05:47:38.918122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:51.138 [2024-12-10 05:47:38.918128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:51.138 [2024-12-10 05:47:38.918135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:51.138 [2024-12-10 05:47:38.918142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:51.138 [2024-12-10 05:47:38.918148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:51.138 [2024-12-10 05:47:38.918154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:51.138 [2024-12-10 05:47:38.918161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:51.138 [2024-12-10 05:47:38.918422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:51.138 [2024-12-10 05:47:38.918437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:51.138 [2024-12-10 05:47:38.918449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:51.138 [2024-12-10 05:47:38.918458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:51.138 [2024-12-10 05:47:38.918467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:51.138 [2024-12-10 05:47:38.918475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:51.138 [2024-12-10 05:47:38.918483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:51.138 [2024-12-10 05:47:38.918527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:51.138 [2024-12-10 05:47:38.918535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:51.138 [2024-12-10 
05:47:38.918542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:51.139 [2024-12-10 05:47:38.918551] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:51.139 [2024-12-10 05:47:38.918822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.139 [2024-12-10 05:47:38.918837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c1a0 with addr=10.0.0.2, port=4420 00:22:51.139 [2024-12-10 05:47:38.918846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c1a0 is same with the state(6) to be set 00:22:51.139 [2024-12-10 05:47:38.918988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.139 [2024-12-10 05:47:38.918999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202d6c0 with addr=10.0.0.2, port=4420 00:22:51.139 [2024-12-10 05:47:38.919006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d6c0 is same with the state(6) to be set 00:22:51.139 [2024-12-10 05:47:38.919197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.139 [2024-12-10 05:47:38.919208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202b4d0 with addr=10.0.0.2, port=4420 00:22:51.139 [2024-12-10 05:47:38.919215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202b4d0 is same with the state(6) to be set 00:22:51.139 [2024-12-10 05:47:38.919358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.139 [2024-12-10 05:47:38.919368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2037490 with addr=10.0.0.2, port=4420 00:22:51.139 [2024-12-10 05:47:38.919375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x2037490 is same with the state(6) to be set 00:22:51.139 [2024-12-10 05:47:38.919511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.139 [2024-12-10 05:47:38.919521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248ded0 with addr=10.0.0.2, port=4420 00:22:51.139 [2024-12-10 05:47:38.919528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248ded0 is same with the state(6) to be set 00:22:51.139 [2024-12-10 05:47:38.919751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.139 [2024-12-10 05:47:38.919762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24aeb30 with addr=10.0.0.2, port=4420 00:22:51.139 [2024-12-10 05:47:38.919770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aeb30 is same with the state(6) to be set 00:22:51.139 [2024-12-10 05:47:38.919937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.139 [2024-12-10 05:47:38.919947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f4c610 with addr=10.0.0.2, port=4420 00:22:51.139 [2024-12-10 05:47:38.919955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c610 is same with the state(6) to be set 00:22:51.139 [2024-12-10 05:47:38.919989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c1a0 (9): Bad file descriptor 00:22:51.139 [2024-12-10 05:47:38.919999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202d6c0 (9): Bad file descriptor 00:22:51.139 [2024-12-10 05:47:38.920009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202b4d0 (9): Bad file descriptor 00:22:51.139 [2024-12-10 05:47:38.920018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x2037490 (9): Bad file descriptor 00:22:51.139 [2024-12-10 05:47:38.920026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248ded0 (9): Bad file descriptor 00:22:51.139 [2024-12-10 05:47:38.920035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24aeb30 (9): Bad file descriptor 00:22:51.139 [2024-12-10 05:47:38.920043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4c610 (9): Bad file descriptor 00:22:51.139 [2024-12-10 05:47:38.920067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:51.139 [2024-12-10 05:47:38.920075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:51.139 [2024-12-10 05:47:38.920082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:51.139 [2024-12-10 05:47:38.920089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:51.139 [2024-12-10 05:47:38.920097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:51.139 [2024-12-10 05:47:38.920104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:51.139 [2024-12-10 05:47:38.920110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:51.139 [2024-12-10 05:47:38.920116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:22:51.139 [2024-12-10 05:47:38.920123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:51.139 [2024-12-10 05:47:38.920129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:51.139 [2024-12-10 05:47:38.920136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:51.139 [2024-12-10 05:47:38.920141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:51.139 [2024-12-10 05:47:38.920149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:51.139 [2024-12-10 05:47:38.920155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:51.139 [2024-12-10 05:47:38.920162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:51.139 [2024-12-10 05:47:38.920172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:51.139 [2024-12-10 05:47:38.920179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:51.139 [2024-12-10 05:47:38.920185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:51.139 [2024-12-10 05:47:38.920191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:51.139 [2024-12-10 05:47:38.920197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:51.139 [2024-12-10 05:47:38.920203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:51.139 [2024-12-10 05:47:38.920209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:51.139 [2024-12-10 05:47:38.920219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:51.139 [2024-12-10 05:47:38.920225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:51.139 [2024-12-10 05:47:38.920232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:51.139 [2024-12-10 05:47:38.920238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:51.139 [2024-12-10 05:47:38.920244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:51.139 [2024-12-10 05:47:38.920251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:22:51.398 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1251590 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1251590 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1251590 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:52.775 rmmod nvme_tcp
00:22:52.775 rmmod nvme_fabrics
00:22:52.775 rmmod nvme_keyring
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1251407 ']'
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1251407
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1251407 ']'
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1251407
00:22:52.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1251407) - No such process
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1251407 is not found'
00:22:52.775 Process with pid 1251407 is not found
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:52.775 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:54.682
00:22:54.682 real 0m7.098s
00:22:54.682 user 0m16.183s
00:22:54.682 sys 0m1.293s
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:54.682 ************************************
00:22:54.682 END TEST nvmf_shutdown_tc3
00:22:54.682 ************************************
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:54.682 ************************************
00:22:54.682 START TEST nvmf_shutdown_tc4
00:22:54.682 ************************************
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:54.682 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:22:54.683 Found 0000:af:00.0 (0x8086 - 0x159b)
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:22:54.683 Found 0000:af:00.1 (0x8086 - 0x159b)
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:22:54.683 Found net devices under 0000:af:00.0: cvl_0_0
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:22:54.683 Found net devices under 0000:af:00.1: cvl_0_1
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:54.683 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:54.943 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:54.943 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:54.943 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:54.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:54.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms
00:22:54.944
00:22:54.944 --- 10.0.0.2 ping statistics ---
00:22:54.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:54.944 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:54.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:54.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms
00:22:54.944
00:22:54.944 --- 10.0.0.1 ping statistics ---
00:22:54.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:54.944 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:54.944 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1252803
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1252803
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1252803 ']'
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:55.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:55.229 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:55.229 [2024-12-10 05:47:42.909188] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization...
00:22:55.229 [2024-12-10 05:47:42.909240] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:55.229 [2024-12-10 05:47:42.987976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:55.229 [2024-12-10 05:47:43.028520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:55.229 [2024-12-10 05:47:43.028558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:55.229 [2024-12-10 05:47:43.028565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:55.229 [2024-12-10 05:47:43.028571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:55.229 [2024-12-10 05:47:43.028576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:55.229 [2024-12-10 05:47:43.030056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:55.229 [2024-12-10 05:47:43.030185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:22:55.229 [2024-12-10 05:47:43.030258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:55.229 [2024-12-10 05:47:43.030259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:56.163 [2024-12-10 05:47:43.785791] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.163 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:56.163 Malloc1
00:22:56.163 [2024-12-10 05:47:43.891069] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:56.163 Malloc2
00:22:56.163 Malloc3
00:22:56.163 Malloc4
00:22:56.163 Malloc5
00:22:56.421 Malloc6
00:22:56.421 Malloc7
00:22:56.421 Malloc8
00:22:56.421 Malloc9
00:22:56.421 Malloc10
00:22:56.421 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.421 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:22:56.421 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:56.421 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:56.678 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1253100
00:22:56.678 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:22:56.678 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
[2024-12-10 05:47:44.394581] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1252803
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1252803 ']'
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1252803
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1252803
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1252803'
00:23:01.949 killing process with pid 1252803
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1252803
00:23:01.949 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1252803
00:23:01.949 [2024-12-10 05:47:49.398050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcae2a0 is same with the state(6) to be set
00:23:01.949 [2024-12-10 05:47:49.398101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcae2a0 is same with the state(6) to be set
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 [2024-12-10 05:47:49.399347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:01.949 starting I/O failed: -6
00:23:01.949 starting I/O failed: -6
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 Write completed with error (sct=0, sc=8)
00:23:01.949 starting I/O failed: -6
00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 [2024-12-10 05:47:49.400084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac0710 is same with the state(6) to be set 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 [2024-12-10 05:47:49.400113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac0710 is same with the state(6) to be set 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 [2024-12-10 05:47:49.400123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac0710 is same with the state(6) to be set 00:23:01.949 starting I/O failed: -6 00:23:01.949 [2024-12-10 05:47:49.400131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac0710 is same with the state(6) to be set 00:23:01.949 [2024-12-10 05:47:49.400139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac0710 is same with the state(6) to be set 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 [2024-12-10 05:47:49.400145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac0710 is same with the state(6) to be set 00:23:01.949 [2024-12-10 05:47:49.400152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xac0710 is same with the state(6) to be set 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 [2024-12-10 05:47:49.400159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac0710 is same with the state(6) to be set 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 [2024-12-10 05:47:49.400330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 
00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.949 starting I/O failed: -6 00:23:01.949 Write completed with error (sct=0, sc=8) 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 [2024-12-10 05:47:49.400721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfd50 is same with the state(6) to be set 00:23:01.950 starting I/O failed: -6 00:23:01.950 [2024-12-10 05:47:49.400748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfd50 is same with the state(6) to be set 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 [2024-12-10 05:47:49.400758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfd50 is same with the state(6) to be set 00:23:01.950 [2024-12-10 05:47:49.400766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfd50 is same with the state(6) to be set 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 [2024-12-10 05:47:49.400773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfd50 is same with the state(6) to be set 00:23:01.950 [2024-12-10 05:47:49.400780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfd50 is same with the state(6) to be set 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 Write completed with error
(sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 [2024-12-10 05:47:49.401078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaec40 is same with the state(6) to be set 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 [2024-12-10 05:47:49.401103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaec40 is same with the state(6) to be set 00:23:01.950 [2024-12-10 05:47:49.401112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaec40 is same with the state(6) to be set 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 [2024-12-10 05:47:49.401120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaec40 is same with the state(6) to be set 00:23:01.950 starting I/O failed: -6 00:23:01.950 [2024-12-10 05:47:49.401126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaec40 is 
same with the state(6) to be set 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 [2024-12-10 05:47:49.401373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:01.950 [2024-12-10 05:47:49.401427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf110 is same with the state(6) to be set 00:23:01.950 [2024-12-10 05:47:49.401439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf110 is same with the state(6) to be set 00:23:01.950 [2024-12-10 05:47:49.401445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf110 is same with the state(6) to be set 00:23:01.950 [2024-12-10 05:47:49.401452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf110 is same with the state(6) to be set 00:23:01.950 [2024-12-10 05:47:49.401459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf110 is 
same with the state(6) to be set 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 [2024-12-10 05:47:49.401466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf110 is same with the state(6) to be set 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 
starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 [2024-12-10 05:47:49.402078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf5e0 is same with the state(6) to be set 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 [2024-12-10 05:47:49.402100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf5e0 is same with the state(6) to be set 00:23:01.950 starting I/O failed: -6 00:23:01.950 [2024-12-10 05:47:49.402108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf5e0 is same with the state(6) to be set 00:23:01.950 [2024-12-10 05:47:49.402116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf5e0 is same with the state(6) to be set 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 [2024-12-10 05:47:49.402129]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf5e0 is same with the state(6) to be set 00:23:01.950 [2024-12-10 05:47:49.402136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf5e0 is same with the state(6) to be set 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 [2024-12-10 05:47:49.402142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaf5e0 is same with the state(6) to be set 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 starting I/O failed: -6 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.950 [2024-12-10 05:47:49.402427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcae770 is same with the state(6) to be set 00:23:01.950 starting I/O failed: -6 00:23:01.950 [2024-12-10
05:47:49.402441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcae770 is same with the state(6) to be set 00:23:01.950 Write completed with error (sct=0, sc=8) 00:23:01.951 [2024-12-10 05:47:49.402447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcae770 is same with the state(6) to be set 00:23:01.951 starting I/O failed: -6 00:23:01.951 [2024-12-10 05:47:49.402454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcae770 is same with the state(6) to be set 00:23:01.951 [2024-12-10 05:47:49.402462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcae770 is same with the state(6) to be set 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 [2024-12-10 05:47:49.402468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcae770 is same with the state(6) to be set 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 [2024-12-10 05:47:49.403071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:01.951 NVMe io qpair process completion error 00:23:01.951 Write completed with error 
(sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 
Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 [2024-12-10 05:47:49.404031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 
00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 
[2024-12-10 05:47:49.404918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O 
failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.951 starting I/O failed: -6 00:23:01.951 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write 
completed with error (sct=0, sc=8) 00:23:01.952 [2024-12-10 05:47:49.405886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with 
error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed 
with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 [2024-12-10 05:47:49.407584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:01.952 NVMe io qpair process completion error 00:23:01.952 Write completed with error 
(sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 
Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 [2024-12-10 05:47:49.408554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 
00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.952 starting I/O failed: -6 00:23:01.952 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 [2024-12-10 05:47:49.409416] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 
00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with 
error (sct=0, sc=8) 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 [2024-12-10 05:47:49.410443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, 
sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.953 starting I/O failed: -6 00:23:01.953 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error 
(sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 [2024-12-10 05:47:49.411971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:01.954 NVMe io qpair process completion error 00:23:01.954 Write completed with error (sct=0, sc=8) 
00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed 
with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 [2024-12-10 05:47:49.412998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 
Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 [2024-12-10 05:47:49.413785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:01.954 starting I/O failed: -6 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: 
-6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with 
error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.954 starting I/O failed: -6 00:23:01.954 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 [2024-12-10 05:47:49.414851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:01.955 
Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 00:23:01.955 Write completed with error (sct=0, sc=8) 00:23:01.955 starting I/O failed: -6 
00:23:01.955 Write completed with error (sct=0, sc=8)
00:23:01.955 starting I/O failed: -6
00:23:01.955 [pair of lines above repeated for each outstanding write I/O on the failing qpair]
00:23:01.955 [2024-12-10 05:47:49.416875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:01.955 NVMe io qpair process completion error
00:23:01.955 Write completed with error (sct=0, sc=8)
00:23:01.955 starting I/O failed: -6
00:23:01.955 [pair of lines above repeated for each outstanding write I/O on the failing qpair]
00:23:01.955 [2024-12-10 05:47:49.417886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:01.956 Write completed with error (sct=0, sc=8)
00:23:01.956 starting I/O failed: -6
00:23:01.956 [pair of lines above repeated for each outstanding write I/O on the failing qpair]
00:23:01.956 [2024-12-10 05:47:49.418825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:01.956 Write completed with error (sct=0, sc=8)
00:23:01.956 starting I/O failed: -6
00:23:01.956 [pair of lines above repeated for each outstanding write I/O on the failing qpair]
00:23:01.956 [2024-12-10 05:47:49.419819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:01.957 Write completed with error (sct=0, sc=8)
00:23:01.957 starting I/O failed: -6
00:23:01.957 [pair of lines above repeated for each outstanding write I/O on the failing qpair]
00:23:01.957 [2024-12-10 05:47:49.424214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:01.957 NVMe io qpair process completion error
00:23:01.957 Write completed with error (sct=0, sc=8)
00:23:01.957 starting I/O failed: -6
00:23:01.957 [pair of lines above repeated for each outstanding write I/O on the failing qpair]
00:23:01.957 [2024-12-10 05:47:49.425114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:01.957 Write completed with error (sct=0, sc=8)
00:23:01.957 starting I/O failed: -6
00:23:01.957 [pair of lines above repeated for each outstanding write I/O on the failing qpair]
00:23:01.957 [2024-12-10 05:47:49.426070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:01.957 Write completed with error (sct=0, sc=8)
00:23:01.957 starting I/O failed: -6
00:23:01.957 [pair of lines above repeated for each outstanding write I/O on the failing qpair]
00:23:01.957 [2024-12-10 05:47:49.427122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:01.958 Write completed with error (sct=0, sc=8)
00:23:01.958 starting I/O failed: -6
00:23:01.958 [pair of lines above repeated for each outstanding write I/O on the failing qpair]
00:23:01.958 [2024-12-10 05:47:49.430454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:01.958 NVMe io qpair process completion error
00:23:01.958 Write completed with error (sct=0, sc=8)
00:23:01.958 starting I/O failed: -6
00:23:01.958 [pair of lines above repeated for each outstanding write I/O on the failing qpair]
00:23:01.958 [2024-12-10 05:47:49.431442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:01.958 Write completed with error (sct=0, sc=8)
00:23:01.959 starting I/O failed: -6
00:23:01.959 Write completed with error (sct=0, sc=8)
00:23:01.959 Write completed with error (sct=0, sc=8)
00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 [2024-12-10 05:47:49.432357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 
00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with 
error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 [2024-12-10 05:47:49.433353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 
starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 
00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, 
sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.959 starting I/O failed: -6 00:23:01.959 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 [2024-12-10 05:47:49.434942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:01.960 NVMe io qpair process completion error 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 
00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 [2024-12-10 05:47:49.435920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 
00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write 
completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 [2024-12-10 05:47:49.436816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write 
completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 
00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 [2024-12-10 05:47:49.437855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.960 Write completed with error (sct=0, sc=8) 00:23:01.960 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, 
sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error 
(sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with 
error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 [2024-12-10 05:47:49.441163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:01.961 NVMe io qpair process completion error 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 Write completed with error (sct=0, sc=8) 00:23:01.961 starting I/O failed: -6 00:23:01.961 Write 
completed with error (sct=0, sc=8)
00:23:01.961 starting I/O failed: -6
00:23:01.961 Write completed with error (sct=0, sc=8)
00:23:01.961 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted]
00:23:01.962 [2024-12-10 05:47:49.448812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted]
00:23:01.963 [2024-12-10 05:47:49.449731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted]
00:23:01.963 [2024-12-10 05:47:49.450746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted]
00:23:01.964 [2024-12-10 05:47:49.453063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:01.964 NVMe io qpair process completion error
00:23:01.964 Initializing NVMe Controllers
00:23:01.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:01.964 Controller IO queue size 128, less than required.
00:23:01.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:01.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:01.964 Controller IO queue size 128, less than required.
00:23:01.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:01.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:01.964 Controller IO queue size 128, less than required.
00:23:01.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:01.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:01.964 Controller IO queue size 128, less than required.
00:23:01.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:01.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:01.964 Controller IO queue size 128, less than required.
00:23:01.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:01.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:01.964 Controller IO queue size 128, less than required.
00:23:01.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:01.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:01.964 Controller IO queue size 128, less than required.
00:23:01.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:01.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:01.964 Controller IO queue size 128, less than required.
00:23:01.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:01.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:01.964 Controller IO queue size 128, less than required.
00:23:01.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:01.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:01.964 Controller IO queue size 128, less than required.
00:23:01.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:01.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:01.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:01.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:01.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:01.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:01.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:01.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:01.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:01.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:01.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:01.964 Initialization complete. Launching workers.
00:23:01.964 ========================================================
00:23:01.964 Latency(us)
00:23:01.964 Device Information : IOPS MiB/s Average min max
00:23:01.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2191.77 94.18 58403.96 864.15 108574.18
00:23:01.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2231.00 95.86 57387.80 737.68 126575.40
00:23:01.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2219.58 95.37 57699.72 699.98 125385.35
00:23:01.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2223.46 95.54 57643.80 657.23 105780.16
00:23:01.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2219.15 95.35 57787.51 955.70 109625.41
00:23:01.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2222.17 95.48 57719.71 657.06 123635.34
00:23:01.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2162.03 92.90 59351.46 472.91 115361.29
00:23:01.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2180.14 93.68 58903.61 960.61 120140.21
00:23:01.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2189.83 94.09 57976.36 640.32 98679.16
00:23:01.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2188.76 94.05 58016.00 703.51 98807.81
00:23:01.964 ========================================================
00:23:01.964 Total : 22027.89 946.51 58083.61 472.91 126575.40
00:23:01.964
00:23:01.964 [2024-12-10 05:47:49.456089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff1bc0 is same with the state(6) to be set
00:23:01.964 [2024-12-10 05:47:49.456135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff1ef0 is same with the state(6) to be set
00:23:01.964 [2024-12-10 05:47:49.456164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff2740 is same with the state(6) to be set
00:23:01.964 [2024-12-10 05:47:49.456196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff2a70 is same with the state(6) to be set
00:23:01.964 [2024-12-10 05:47:49.456224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff1560 is same with the state(6) to be set
00:23:01.964 [2024-12-10 05:47:49.456252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff2410 is same with the state(6) to be set
00:23:01.964 [2024-12-10 05:47:49.456279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff3ae0 is same with the state(6) to be set
00:23:01.964 [2024-12-10 05:47:49.456306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff1890 is same with the state(6) to be set
00:23:01.964 [2024-12-10 05:47:49.456335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff3900 is same with the state(6) to be set
00:23:01.964 [2024-12-10 05:47:49.456361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff3720 is same with the state(6) to be set
00:23:01.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:01.964 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1253100
00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1253100
00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@640 -- # local arg=wait 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1253100 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.901 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:03.160 rmmod nvme_tcp 00:23:03.160 rmmod nvme_fabrics 00:23:03.160 rmmod nvme_keyring 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1252803 ']' 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1252803 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1252803 ']' 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1252803 00:23:03.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1252803) - No such process 00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1252803 is not found' 00:23:03.160 Process with pid 1252803 is not found 
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:03.160 05:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:05.065 05:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:05.065
00:23:05.065 real 0m10.455s
00:23:05.065 user 0m27.553s
00:23:05.065 sys 0m5.201s
00:23:05.065 05:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:05.065 05:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:05.065 ************************************
00:23:05.065 END TEST nvmf_shutdown_tc4
00:23:05.065 ************************************
00:23:05.323 05:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:23:05.323
00:23:05.323 real 0m40.230s
00:23:05.323 user 1m36.866s
00:23:05.323 sys 0m14.033s
00:23:05.323 05:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:05.323 05:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:05.323 ************************************
00:23:05.323 END TEST nvmf_shutdown
00:23:05.323 ************************************
00:23:05.323 05:47:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:05.323 05:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:05.323 05:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:05.323 05:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:05.324 ************************************
00:23:05.324 START TEST nvmf_nsid
00:23:05.324 ************************************
00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:05.324 * Looking for test storage...
00:23:05.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.324 
05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:05.324 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:05.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.583 --rc genhtml_branch_coverage=1 00:23:05.583 --rc genhtml_function_coverage=1 00:23:05.583 --rc genhtml_legend=1 00:23:05.583 --rc geninfo_all_blocks=1 00:23:05.583 --rc 
geninfo_unexecuted_blocks=1 00:23:05.583 00:23:05.583 ' 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:05.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.583 --rc genhtml_branch_coverage=1 00:23:05.583 --rc genhtml_function_coverage=1 00:23:05.583 --rc genhtml_legend=1 00:23:05.583 --rc geninfo_all_blocks=1 00:23:05.583 --rc geninfo_unexecuted_blocks=1 00:23:05.583 00:23:05.583 ' 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:05.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.583 --rc genhtml_branch_coverage=1 00:23:05.583 --rc genhtml_function_coverage=1 00:23:05.583 --rc genhtml_legend=1 00:23:05.583 --rc geninfo_all_blocks=1 00:23:05.583 --rc geninfo_unexecuted_blocks=1 00:23:05.583 00:23:05.583 ' 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:05.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.583 --rc genhtml_branch_coverage=1 00:23:05.583 --rc genhtml_function_coverage=1 00:23:05.583 --rc genhtml_legend=1 00:23:05.583 --rc geninfo_all_blocks=1 00:23:05.583 --rc geninfo_unexecuted_blocks=1 00:23:05.583 00:23:05.583 ' 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.583 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.584 05:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.584 05:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.152 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:12.153 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:12.153 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:12.153 Found net devices under 0000:af:00.0: cvl_0_0 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:12.153 Found net devices under 0000:af:00.1: cvl_0_1 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:12.153 05:47:58 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:12.153 05:47:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:12.153 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:12.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:23:12.153 00:23:12.153 --- 10.0.0.2 ping statistics --- 00:23:12.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.153 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:23:12.153 00:23:12.153 --- 10.0.0.1 ping statistics --- 00:23:12.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.153 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:12.153 05:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1257480 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1257480 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1257480 ']' 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.153 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:12.154 [2024-12-10 05:47:59.202707] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:23:12.154 [2024-12-10 05:47:59.202759] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.154 [2024-12-10 05:47:59.278415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.154 [2024-12-10 05:47:59.318424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.154 [2024-12-10 05:47:59.318459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.154 [2024-12-10 05:47:59.318467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.154 [2024-12-10 05:47:59.318473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.154 [2024-12-10 05:47:59.318478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:12.154 [2024-12-10 05:47:59.318961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1257509 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.154 
05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=62ad837c-0b9c-42fb-b196-8d724c2c586e 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=521ed9ab-1d7c-47ac-8d9a-17043caa854e 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=991c6e01-061f-412c-aace-b7a221adbd26 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:12.154 null0 00:23:12.154 null1 00:23:12.154 [2024-12-10 05:47:59.497387] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:23:12.154 [2024-12-10 05:47:59.497431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257509 ] 00:23:12.154 null2 00:23:12.154 [2024-12-10 05:47:59.502362] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.154 [2024-12-10 05:47:59.526557] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.154 [2024-12-10 05:47:59.553701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1257509 /var/tmp/tgt2.sock 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1257509 ']' 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:12.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:12.154 [2024-12-10 05:47:59.596728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:12.154 05:47:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:12.413 [2024-12-10 05:48:00.119726] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.413 [2024-12-10 05:48:00.135826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:12.413 nvme0n1 nvme0n2 00:23:12.413 nvme1n1 00:23:12.413 05:48:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:12.413 05:48:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:12.413 05:48:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 
00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:13.788 05:48:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 62ad837c-0b9c-42fb-b196-8d724c2c586e 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:14.723 
05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=62ad837c0b9c42fbb1968d724c2c586e 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 62AD837C0B9C42FBB1968D724C2C586E 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 62AD837C0B9C42FBB1968D724C2C586E == \6\2\A\D\8\3\7\C\0\B\9\C\4\2\F\B\B\1\9\6\8\D\7\2\4\C\2\C\5\8\6\E ]] 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 521ed9ab-1d7c-47ac-8d9a-17043caa854e 00:23:14.723 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 
00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=521ed9ab1d7c47ac8d9a17043caa854e 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 521ED9AB1D7C47AC8D9A17043CAA854E 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 521ED9AB1D7C47AC8D9A17043CAA854E == \5\2\1\E\D\9\A\B\1\D\7\C\4\7\A\C\8\D\9\A\1\7\0\4\3\C\A\A\8\5\4\E ]] 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 991c6e01-061f-412c-aace-b7a221adbd26 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 
00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=991c6e01061f412caaceb7a221adbd26 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 991C6E01061F412CAACEB7A221ADBD26 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 991C6E01061F412CAACEB7A221ADBD26 == \9\9\1\C\6\E\0\1\0\6\1\F\4\1\2\C\A\A\C\E\B\7\A\2\2\1\A\D\B\D\2\6 ]] 00:23:14.724 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1257509 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1257509 ']' 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1257509 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1257509 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1257509' 00:23:14.983 killing process with pid 1257509 00:23:14.983 05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1257509 00:23:14.983 
05:48:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1257509 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:15.242 rmmod nvme_tcp 00:23:15.242 rmmod nvme_fabrics 00:23:15.242 rmmod nvme_keyring 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1257480 ']' 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1257480 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1257480 ']' 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1257480 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.242 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1257480 00:23:15.501 
05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1257480' 00:23:15.501 killing process with pid 1257480 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1257480 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1257480 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.501 05:48:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.038 05:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.038 00:23:18.038 real 0m12.339s 00:23:18.038 user 0m9.610s 00:23:18.038 sys 0m5.482s 00:23:18.038 05:48:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.038 05:48:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:18.038 ************************************ 00:23:18.038 END TEST nvmf_nsid 00:23:18.038 ************************************ 00:23:18.038 05:48:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:18.038 00:23:18.038 real 11m55.730s 00:23:18.038 user 25m19.436s 00:23:18.038 sys 3m42.812s 00:23:18.038 05:48:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.038 05:48:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:18.038 ************************************ 00:23:18.038 END TEST nvmf_target_extra 00:23:18.038 ************************************ 00:23:18.038 05:48:05 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:18.038 05:48:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.038 05:48:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.038 05:48:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.038 ************************************ 00:23:18.038 START TEST nvmf_host 00:23:18.038 ************************************ 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:18.038 * Looking for test storage... 
00:23:18.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.038 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:18.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.039 --rc genhtml_branch_coverage=1 00:23:18.039 --rc genhtml_function_coverage=1 00:23:18.039 --rc genhtml_legend=1 00:23:18.039 --rc geninfo_all_blocks=1 00:23:18.039 --rc geninfo_unexecuted_blocks=1 00:23:18.039 00:23:18.039 ' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:18.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.039 --rc genhtml_branch_coverage=1 00:23:18.039 --rc genhtml_function_coverage=1 00:23:18.039 --rc genhtml_legend=1 00:23:18.039 --rc 
geninfo_all_blocks=1 00:23:18.039 --rc geninfo_unexecuted_blocks=1 00:23:18.039 00:23:18.039 ' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:18.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.039 --rc genhtml_branch_coverage=1 00:23:18.039 --rc genhtml_function_coverage=1 00:23:18.039 --rc genhtml_legend=1 00:23:18.039 --rc geninfo_all_blocks=1 00:23:18.039 --rc geninfo_unexecuted_blocks=1 00:23:18.039 00:23:18.039 ' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:18.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.039 --rc genhtml_branch_coverage=1 00:23:18.039 --rc genhtml_function_coverage=1 00:23:18.039 --rc genhtml_legend=1 00:23:18.039 --rc geninfo_all_blocks=1 00:23:18.039 --rc geninfo_unexecuted_blocks=1 00:23:18.039 00:23:18.039 ' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.039 ************************************ 00:23:18.039 START TEST nvmf_multicontroller 00:23:18.039 ************************************ 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:18.039 * Looking for test storage... 
00:23:18.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:18.039 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:18.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.040 --rc genhtml_branch_coverage=1 00:23:18.040 --rc genhtml_function_coverage=1 
00:23:18.040 --rc genhtml_legend=1 00:23:18.040 --rc geninfo_all_blocks=1 00:23:18.040 --rc geninfo_unexecuted_blocks=1 00:23:18.040 00:23:18.040 ' 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:18.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.040 --rc genhtml_branch_coverage=1 00:23:18.040 --rc genhtml_function_coverage=1 00:23:18.040 --rc genhtml_legend=1 00:23:18.040 --rc geninfo_all_blocks=1 00:23:18.040 --rc geninfo_unexecuted_blocks=1 00:23:18.040 00:23:18.040 ' 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:18.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.040 --rc genhtml_branch_coverage=1 00:23:18.040 --rc genhtml_function_coverage=1 00:23:18.040 --rc genhtml_legend=1 00:23:18.040 --rc geninfo_all_blocks=1 00:23:18.040 --rc geninfo_unexecuted_blocks=1 00:23:18.040 00:23:18.040 ' 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:18.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.040 --rc genhtml_branch_coverage=1 00:23:18.040 --rc genhtml_function_coverage=1 00:23:18.040 --rc genhtml_legend=1 00:23:18.040 --rc geninfo_all_blocks=1 00:23:18.040 --rc geninfo_unexecuted_blocks=1 00:23:18.040 00:23:18.040 ' 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.040 05:48:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.040 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.299 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.299 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.299 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.299 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:18.299 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:18.299 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.299 05:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:24.870 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:24.870 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.870 05:48:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:24.870 Found net devices under 0000:af:00.0: cvl_0_0 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:24.870 Found net devices under 0000:af:00.1: cvl_0_1 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.870 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:24.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:23:24.871 00:23:24.871 --- 10.0.0.2 ping statistics --- 00:23:24.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.871 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:23:24.871 00:23:24.871 --- 10.0.0.1 ping statistics --- 00:23:24.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.871 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1262258 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1262258 00:23:24.871 05:48:11 
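The nvmf/common.sh trace above isolates the target-side interface in a network namespace and addresses the point-to-point link before verifying it with pings in both directions. A minimal dry-run sketch of that sequence (interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addressing are taken from this run; the `run` wrapper only prints each command, since the real ones need root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based NVMe/TCP test topology from nvmf/common.sh.
# run() echoes instead of executing, so this needs no privileges.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # namespace that will hold the target-side interface
TGT_IF=cvl_0_0       # target interface (moved into the namespace)
INI_IF=cvl_0_1       # initiator interface (stays in the root namespace)

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator side, then verify both directions.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Moving the target NIC into its own namespace lets a single machine act as both initiator and target over a real TCP path, which is why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`.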
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1262258 ']' 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.871 05:48:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 [2024-12-10 05:48:11.978086] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:23:24.871 [2024-12-10 05:48:11.978137] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.871 [2024-12-10 05:48:12.055822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:24.871 [2024-12-10 05:48:12.097604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.871 [2024-12-10 05:48:12.097637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:24.871 [2024-12-10 05:48:12.097644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.871 [2024-12-10 05:48:12.097650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.871 [2024-12-10 05:48:12.097655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.871 [2024-12-10 05:48:12.098948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.871 [2024-12-10 05:48:12.099059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.871 [2024-12-10 05:48:12.099060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 [2024-12-10 05:48:12.239908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 Malloc0 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 [2024-12-10 
05:48:12.304599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 [2024-12-10 05:48:12.312544] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 Malloc1 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.871 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1262423 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
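The target-side configuration traced above (host/multicontroller.sh lines 27 through 41) creates the TCP transport, two malloc bdevs, and two subsystems, each listening on both port 4420 and port 4421 so the host can attach more than one path. A dry-run sketch of the equivalent RPC sequence; the `scripts/rpc.py` path is an assumption about the SPDK checkout layout, and `run` only prints:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target RPC sequence from host/multicontroller.sh.
run() { echo "+ $*"; }
RPC="scripts/rpc.py"   # assumed location of SPDK's RPC client

run "$RPC" nvmf_create_transport -t tcp -o -u 8192
for i in 1 2; do
  # Malloc0 backs cnode1, Malloc1 backs cnode2 (64 MiB, 512-byte blocks).
  run "$RPC" bdev_malloc_create 64 512 -b "Malloc$((i - 1))"
  run "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  run "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
  # Two listeners per subsystem -> two network paths to the same namespace.
  run "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  run "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4421
done
```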
write -t 1 -f 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1262423 /var/tmp/bdevperf.sock 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1262423 ']' 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.872 NVMe0n1 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.872 1 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:24.872 05:48:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.872 request: 00:23:24.872 { 00:23:24.872 "name": "NVMe0", 00:23:24.872 "trtype": "tcp", 00:23:24.872 "traddr": "10.0.0.2", 00:23:24.872 "adrfam": "ipv4", 00:23:24.872 "trsvcid": "4420", 00:23:24.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.872 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:24.872 "hostaddr": "10.0.0.1", 00:23:24.872 "prchk_reftag": false, 00:23:24.872 "prchk_guard": false, 00:23:24.872 "hdgst": false, 00:23:24.872 "ddgst": false, 00:23:24.872 "allow_unrecognized_csi": false, 00:23:24.872 "method": "bdev_nvme_attach_controller", 00:23:24.872 "req_id": 1 00:23:24.872 } 00:23:24.872 Got JSON-RPC error response 00:23:24.872 response: 00:23:24.872 { 00:23:24.872 "code": -114, 00:23:24.872 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:24.872 } 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:24.872 05:48:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.872 request: 00:23:24.872 { 00:23:24.872 "name": "NVMe0", 00:23:24.872 "trtype": "tcp", 00:23:24.872 "traddr": "10.0.0.2", 00:23:24.872 "adrfam": "ipv4", 00:23:24.872 "trsvcid": "4420", 00:23:24.872 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:24.872 "hostaddr": "10.0.0.1", 00:23:24.872 "prchk_reftag": false, 00:23:24.872 "prchk_guard": false, 00:23:24.872 "hdgst": false, 00:23:24.872 "ddgst": false, 00:23:24.872 "allow_unrecognized_csi": false, 00:23:24.872 "method": "bdev_nvme_attach_controller", 00:23:24.872 "req_id": 1 00:23:24.872 } 00:23:24.872 Got JSON-RPC error response 00:23:24.872 response: 00:23:24.872 { 00:23:24.872 "code": -114, 00:23:24.872 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:24.872 } 00:23:24.872 05:48:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.872 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.131 request: 00:23:25.131 { 00:23:25.131 "name": "NVMe0", 00:23:25.131 "trtype": "tcp", 00:23:25.131 "traddr": "10.0.0.2", 00:23:25.131 "adrfam": "ipv4", 00:23:25.131 "trsvcid": "4420", 00:23:25.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.131 "hostaddr": "10.0.0.1", 00:23:25.131 "prchk_reftag": false, 00:23:25.131 "prchk_guard": false, 00:23:25.131 "hdgst": false, 00:23:25.131 "ddgst": false, 00:23:25.131 "multipath": "disable", 00:23:25.131 "allow_unrecognized_csi": false, 00:23:25.131 "method": "bdev_nvme_attach_controller", 00:23:25.131 "req_id": 1 00:23:25.131 } 00:23:25.131 Got JSON-RPC error response 00:23:25.131 response: 00:23:25.131 { 00:23:25.131 "code": -114, 00:23:25.131 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:25.131 } 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.131 request: 00:23:25.131 { 00:23:25.131 "name": "NVMe0", 00:23:25.131 "trtype": "tcp", 00:23:25.131 "traddr": "10.0.0.2", 00:23:25.131 "adrfam": "ipv4", 00:23:25.131 "trsvcid": "4420", 00:23:25.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.131 "hostaddr": "10.0.0.1", 00:23:25.131 "prchk_reftag": false, 00:23:25.131 "prchk_guard": false, 00:23:25.131 "hdgst": false, 00:23:25.131 "ddgst": false, 00:23:25.131 "multipath": "failover", 00:23:25.131 "allow_unrecognized_csi": false, 00:23:25.131 "method": "bdev_nvme_attach_controller", 00:23:25.131 "req_id": 1 00:23:25.131 } 00:23:25.131 Got JSON-RPC error response 00:23:25.131 response: 00:23:25.131 { 00:23:25.131 "code": -114, 00:23:25.131 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:25.131 } 00:23:25.131 05:48:12 
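The four `NOT rpc_cmd` cases above all expect JSON-RPC error -114: once controller NVMe0 exists, re-attaching under the same name with a different hostnqn, a different subsystem (cnode2), `-x disable`, or `-x failover` on the same path is rejected. A dry-run sketch contrasting the rejected calls with the accepted second-path attach that follows at host/multicontroller.sh line 79 (socket path from this run; `run` only prints):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the bdev_nvme_attach_controller cases exercised above.
run() { echo "+ $*"; }
RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"   # bdevperf RPC socket from this run

# First attach: creates controller NVMe0 on port 4420 (succeeds).
run $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
# Rejected with -114: same controller name, different subsystem.
run $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
# Rejected with -114: multipath explicitly disabled on an existing controller.
run $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
# Accepted: same subsystem on the second listener port adds a path to NVMe0.
run $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
```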
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.131 NVMe0n1 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.131 05:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.390 00:23:25.390 05:48:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.390 05:48:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.390 05:48:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:25.390 05:48:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.390 05:48:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.390 05:48:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.390 05:48:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:25.390 05:48:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:26.325 { 00:23:26.325 "results": [ 00:23:26.325 { 00:23:26.325 "job": "NVMe0n1", 00:23:26.325 "core_mask": "0x1", 00:23:26.325 "workload": "write", 00:23:26.325 "status": "finished", 00:23:26.325 "queue_depth": 128, 00:23:26.325 "io_size": 4096, 00:23:26.325 "runtime": 1.003095, 00:23:26.325 "iops": 25139.194193969663, 00:23:26.325 "mibps": 98.199977320194, 00:23:26.325 "io_failed": 0, 00:23:26.325 "io_timeout": 0, 00:23:26.325 "avg_latency_us": 5084.908213015785, 00:23:26.325 "min_latency_us": 3105.158095238095, 00:23:26.325 "max_latency_us": 15603.809523809523 00:23:26.325 } 00:23:26.325 ], 00:23:26.325 "core_count": 1 00:23:26.325 } 00:23:26.325 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:23:26.325 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.325 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.583 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.583 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:26.583 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1262423 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1262423 ']' 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1262423 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1262423 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1262423' 00:23:26.584 killing process with pid 1262423 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1262423 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1262423 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:26.584 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:26.584 [2024-12-10 05:48:12.418514] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:23:26.584 [2024-12-10 05:48:12.418564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262423 ] 00:23:26.584 [2024-12-10 05:48:12.492319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.584 [2024-12-10 05:48:12.533269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.584 [2024-12-10 05:48:13.068069] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name ac3a65c1-310e-4c10-a668-aafd06208d6b already exists 00:23:26.584 [2024-12-10 05:48:13.068096] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:ac3a65c1-310e-4c10-a668-aafd06208d6b alias for bdev NVMe1n1 00:23:26.584 [2024-12-10 05:48:13.068103] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:26.584 Running I/O for 1 seconds... 00:23:26.584 25089.00 IOPS, 98.00 MiB/s 00:23:26.584 Latency(us) 00:23:26.584 [2024-12-10T04:48:14.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.584 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:26.584 NVMe0n1 : 1.00 25139.19 98.20 0.00 0.00 5084.91 3105.16 15603.81 00:23:26.584 [2024-12-10T04:48:14.480Z] =================================================================================================================== 00:23:26.584 [2024-12-10T04:48:14.480Z] Total : 25139.19 98.20 0.00 0.00 5084.91 3105.16 15603.81 00:23:26.584 Received shutdown signal, test time was about 1.000000 seconds 00:23:26.584 00:23:26.584 Latency(us) 00:23:26.584 [2024-12-10T04:48:14.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.584 [2024-12-10T04:48:14.480Z] =================================================================================================================== 00:23:26.584 [2024-12-10T04:48:14.480Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:26.584 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:26.584 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.842 rmmod nvme_tcp 00:23:26.842 rmmod nvme_fabrics 00:23:26.842 rmmod nvme_keyring 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1262258 ']' 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1262258 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1262258 ']' 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1262258 
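[Annotation] The `pap` helper traced above (`read -r file` over `find … -type f | sort -u`, then `cat` and `rm -f`) is a print-and-purge pattern: dump each captured log file into the main trace between `--- path ---` markers, then delete it. A minimal sketch of that pattern (not the actual autotest_common.sh implementation; the single-header format here is an assumption):

```shell
#!/usr/bin/env bash
# pap: print-and-purge. Dump every regular file found under the given
# paths to stdout with a header line, then delete it. Sketch of the
# pattern seen in the trace, not the real helper.
pap() {
  local file
  while read -r file; do
    echo "--- $file ---"
    cat "$file"
    rm -f "$file"
  done < <(find "$@" -type f | sort -u)
}
```

Usage matches the trace: `pap /path/to/try.txt` prints the file's contents framed by its path and removes it afterwards.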
00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1262258 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1262258' 00:23:26.842 killing process with pid 1262258 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1262258 00:23:26.842 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1262258 00:23:27.101 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.101 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.101 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.101 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:27.101 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:27.101 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.101 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.102 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.102 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:27.102 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.102 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.102 05:48:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.013 05:48:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.013 00:23:29.013 real 0m11.142s 00:23:29.013 user 0m12.011s 00:23:29.013 sys 0m5.134s 00:23:29.013 05:48:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.013 05:48:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.013 ************************************ 00:23:29.013 END TEST nvmf_multicontroller 00:23:29.013 ************************************ 00:23:29.013 05:48:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.013 05:48:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.013 05:48:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.013 05:48:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.272 ************************************ 00:23:29.272 START TEST nvmf_aer 00:23:29.272 ************************************ 00:23:29.272 05:48:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.272 * Looking for test storage... 
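[Annotation] The `killprocess` helper traced in the teardown above follows a defensive pattern: reject an empty pid, probe liveness with `kill -0`, look up the process name with `ps -o comm=` so it never signals `sudo` itself, then `kill` and `wait`. A hedged sketch of that pattern (helper name and checks inferred from the autotest_common.sh xtrace, not copied from it):

```shell
#!/usr/bin/env bash
# killprocess: safely terminate a test process by pid. Sketch of the
# pattern visible in the trace above, not the real helper.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1                 # refuse an empty pid
  kill -0 "$pid" 2>/dev/null || return 0    # already gone: nothing to do
  local name
  name=$(ps -o comm= -p "$pid" 2>/dev/null || true)
  [ "$name" = sudo ] && return 1            # never take down sudo itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true           # reap if it was our child
}
```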
00:23:29.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:29.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.272 --rc genhtml_branch_coverage=1 00:23:29.272 --rc genhtml_function_coverage=1 00:23:29.272 --rc genhtml_legend=1 00:23:29.272 --rc geninfo_all_blocks=1 00:23:29.272 --rc geninfo_unexecuted_blocks=1 00:23:29.272 00:23:29.272 ' 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:29.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.272 --rc 
genhtml_branch_coverage=1 00:23:29.272 --rc genhtml_function_coverage=1 00:23:29.272 --rc genhtml_legend=1 00:23:29.272 --rc geninfo_all_blocks=1 00:23:29.272 --rc geninfo_unexecuted_blocks=1 00:23:29.272 00:23:29.272 ' 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:29.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.272 --rc genhtml_branch_coverage=1 00:23:29.272 --rc genhtml_function_coverage=1 00:23:29.272 --rc genhtml_legend=1 00:23:29.272 --rc geninfo_all_blocks=1 00:23:29.272 --rc geninfo_unexecuted_blocks=1 00:23:29.272 00:23:29.272 ' 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:29.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.272 --rc genhtml_branch_coverage=1 00:23:29.272 --rc genhtml_function_coverage=1 00:23:29.272 --rc genhtml_legend=1 00:23:29.272 --rc geninfo_all_blocks=1 00:23:29.272 --rc geninfo_unexecuted_blocks=1 00:23:29.272 00:23:29.272 ' 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.272 05:48:17 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.272 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
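[Annotation] Earlier in this trace, `lt 1.15 2` drives `cmp_versions` in scripts/common.sh: both versions are split on `.-:` into arrays (`read -ra ver1`/`ver2`) and compared field by numeric field. A minimal self-contained sketch of that comparison, assuming dot-separated numeric components only (the real helper also handles `-` and `:` separators and more operators):

```shell
#!/usr/bin/env bash
# lt: return 0 when version $1 is strictly lower than version $2.
# Sketch of the cmp_versions pattern traced above.
lt() {
  local IFS=.                       # split versions on dots
  local -a a=($1) b=($2)
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0} y=${b[i]:-0}       # missing fields compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                          # equal is not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

The numeric field-by-field compare is what makes `1.2.3 < 1.10` come out true, where a plain string compare would get it wrong.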
00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.273 05:48:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.951 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:35.952 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:35.952 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.952 05:48:22 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:35.952 Found net devices under 0000:af:00.0: cvl_0_0 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:35.952 Found net devices under 0000:af:00.1: cvl_0_1 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:35.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
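[Annotation] The `ipts` call above (nvmf/common.sh@287/@790) tags every iptables rule the test adds with `-m comment --comment "SPDK_NVMF:…"`, so teardown can strip exactly those rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore` (seen earlier at @791). A sketch of the tagging wrapper; since the real one invokes iptables (which needs root), this version only prints the command it would run:

```shell
#!/usr/bin/env bash
# ipts: wrap an iptables invocation so the rule carries a SPDK_NVMF
# comment recording its own arguments. Sketch only: echoes instead of
# executing, unlike the real helper.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Tagging rules with a recognizable comment is what lets the cleanup path remove the test's firewall changes without touching any pre-existing rules.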
00:23:35.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:23:35.952 00:23:35.952 --- 10.0.0.2 ping statistics --- 00:23:35.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.952 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:23:35.952 05:48:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:23:35.952 00:23:35.952 --- 10.0.0.1 ping statistics --- 00:23:35.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.952 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1266206 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1266206 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1266206 ']' 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.952 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.952 [2024-12-10 05:48:23.097605] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:23:35.952 [2024-12-10 05:48:23.097650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.952 [2024-12-10 05:48:23.175889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.952 [2024-12-10 05:48:23.217022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:35.952 [2024-12-10 05:48:23.217059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.952 [2024-12-10 05:48:23.217066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.952 [2024-12-10 05:48:23.217071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.952 [2024-12-10 05:48:23.217077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.952 [2024-12-10 05:48:23.218519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.952 [2024-12-10 05:48:23.218627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.952 [2024-12-10 05:48:23.218734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.952 [2024-12-10 05:48:23.218735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 [2024-12-10 05:48:23.356320] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 Malloc0 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 [2024-12-10 05:48:23.423834] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 [ 00:23:35.953 { 00:23:35.953 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:35.953 "subtype": "Discovery", 00:23:35.953 "listen_addresses": [], 00:23:35.953 "allow_any_host": true, 00:23:35.953 "hosts": [] 00:23:35.953 }, 00:23:35.953 { 00:23:35.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.953 "subtype": "NVMe", 00:23:35.953 "listen_addresses": [ 00:23:35.953 { 00:23:35.953 "trtype": "TCP", 00:23:35.953 "adrfam": "IPv4", 00:23:35.953 "traddr": "10.0.0.2", 00:23:35.953 "trsvcid": "4420" 00:23:35.953 } 00:23:35.953 ], 00:23:35.953 "allow_any_host": true, 00:23:35.953 "hosts": [], 00:23:35.953 "serial_number": "SPDK00000000000001", 00:23:35.953 "model_number": "SPDK bdev Controller", 00:23:35.953 "max_namespaces": 2, 00:23:35.953 "min_cntlid": 1, 00:23:35.953 "max_cntlid": 65519, 00:23:35.953 "namespaces": [ 00:23:35.953 { 00:23:35.953 "nsid": 1, 00:23:35.953 "bdev_name": "Malloc0", 00:23:35.953 "name": "Malloc0", 00:23:35.953 "nguid": "5AF14831405D4644BA3136B9FA4BB5B1", 00:23:35.953 "uuid": "5af14831-405d-4644-ba31-36b9fa4bb5b1" 00:23:35.953 } 00:23:35.953 ] 00:23:35.953 } 00:23:35.953 ] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1266248 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 Malloc1 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 [ 00:23:35.953 { 00:23:35.953 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:35.953 "subtype": "Discovery", 00:23:35.953 "listen_addresses": [], 00:23:35.953 "allow_any_host": true, 00:23:35.953 "hosts": [] 00:23:35.953 }, 00:23:35.953 { 00:23:35.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.953 "subtype": "NVMe", 00:23:35.953 "listen_addresses": [ 00:23:35.953 { 00:23:35.953 "trtype": "TCP", 00:23:35.953 "adrfam": "IPv4", 00:23:35.953 "traddr": "10.0.0.2", 00:23:35.953 "trsvcid": "4420" 00:23:35.953 } 00:23:35.953 ], 00:23:35.953 "allow_any_host": true, 00:23:35.953 "hosts": [], 00:23:35.953 "serial_number": "SPDK00000000000001", 00:23:35.953 "model_number": 
"SPDK bdev Controller", 00:23:35.953 "max_namespaces": 2, 00:23:35.953 "min_cntlid": 1, 00:23:35.953 "max_cntlid": 65519, 00:23:35.953 "namespaces": [ 00:23:35.953 { 00:23:35.953 "nsid": 1, 00:23:35.953 "bdev_name": "Malloc0", 00:23:35.953 "name": "Malloc0", 00:23:35.953 "nguid": "5AF14831405D4644BA3136B9FA4BB5B1", 00:23:35.953 "uuid": "5af14831-405d-4644-ba31-36b9fa4bb5b1" 00:23:35.953 }, 00:23:35.953 { 00:23:35.953 "nsid": 2, 00:23:35.953 "bdev_name": "Malloc1", 00:23:35.953 "name": "Malloc1", 00:23:35.953 "nguid": "1F268102A71C4493A04D00B732A5978A", 00:23:35.953 Asynchronous Event Request test 00:23:35.953 Attaching to 10.0.0.2 00:23:35.953 Attached to 10.0.0.2 00:23:35.953 Registering asynchronous event callbacks... 00:23:35.953 Starting namespace attribute notice tests for all controllers... 00:23:35.953 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:35.953 aer_cb - Changed Namespace 00:23:35.953 Cleaning up... 00:23:35.953 "uuid": "1f268102-a71c-4493-a04d-00b732a5978a" 00:23:35.953 } 00:23:35.953 ] 00:23:35.953 } 00:23:35.953 ] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1266248 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 
05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:35.953 rmmod nvme_tcp 00:23:35.953 rmmod nvme_fabrics 00:23:35.953 rmmod nvme_keyring 00:23:35.953 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1266206 ']' 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1266206 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1266206 ']' 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 
-- # kill -0 1266206 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1266206 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1266206' 00:23:36.213 killing process with pid 1266206 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1266206 00:23:36.213 05:48:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1266206 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.213 05:48:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:38.746 00:23:38.746 real 0m9.203s 00:23:38.746 user 0m5.182s 00:23:38.746 sys 0m4.753s 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.746 ************************************ 00:23:38.746 END TEST nvmf_aer 00:23:38.746 ************************************ 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.746 ************************************ 00:23:38.746 START TEST nvmf_async_init 00:23:38.746 ************************************ 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:38.746 * Looking for test storage... 
00:23:38.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.746 05:48:26 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.746 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:38.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.746 --rc genhtml_branch_coverage=1 00:23:38.746 --rc genhtml_function_coverage=1 00:23:38.746 --rc genhtml_legend=1 00:23:38.747 --rc geninfo_all_blocks=1 00:23:38.747 --rc geninfo_unexecuted_blocks=1 00:23:38.747 
00:23:38.747 ' 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:38.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.747 --rc genhtml_branch_coverage=1 00:23:38.747 --rc genhtml_function_coverage=1 00:23:38.747 --rc genhtml_legend=1 00:23:38.747 --rc geninfo_all_blocks=1 00:23:38.747 --rc geninfo_unexecuted_blocks=1 00:23:38.747 00:23:38.747 ' 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:38.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.747 --rc genhtml_branch_coverage=1 00:23:38.747 --rc genhtml_function_coverage=1 00:23:38.747 --rc genhtml_legend=1 00:23:38.747 --rc geninfo_all_blocks=1 00:23:38.747 --rc geninfo_unexecuted_blocks=1 00:23:38.747 00:23:38.747 ' 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:38.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.747 --rc genhtml_branch_coverage=1 00:23:38.747 --rc genhtml_function_coverage=1 00:23:38.747 --rc genhtml_legend=1 00:23:38.747 --rc geninfo_all_blocks=1 00:23:38.747 --rc geninfo_unexecuted_blocks=1 00:23:38.747 00:23:38.747 ' 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:38.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=027906f401d244e29153da30a0c0a56f 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.747 05:48:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:45.317 05:48:32 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:45.317 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:45.317 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:45.318 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:45.318 Found net devices under 0000:af:00.0: cvl_0_0 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:45.318 Found net devices under 0000:af:00.1: cvl_0_1 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:45.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:45.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:23:45.318 00:23:45.318 --- 10.0.0.2 ping statistics --- 00:23:45.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.318 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:45.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:23:45.318 00:23:45.318 --- 10.0.0.1 ping statistics --- 00:23:45.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.318 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1269914 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1269914 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1269914 ']' 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.318 [2024-12-10 05:48:32.383862] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:23:45.318 [2024-12-10 05:48:32.383906] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.318 [2024-12-10 05:48:32.460459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.318 [2024-12-10 05:48:32.499475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.318 [2024-12-10 05:48:32.499510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.318 [2024-12-10 05:48:32.499517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.318 [2024-12-10 05:48:32.499523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.318 [2024-12-10 05:48:32.499528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:45.318 [2024-12-10 05:48:32.500007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.318 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.319 [2024-12-10 05:48:32.630770] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.319 null0 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 027906f401d244e29153da30a0c0a56f 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.319 [2024-12-10 05:48:32.683043] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.319 nvme0n1 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.319 [ 00:23:45.319 { 00:23:45.319 "name": "nvme0n1", 00:23:45.319 "aliases": [ 00:23:45.319 "027906f4-01d2-44e2-9153-da30a0c0a56f" 00:23:45.319 ], 00:23:45.319 "product_name": "NVMe disk", 00:23:45.319 "block_size": 512, 00:23:45.319 "num_blocks": 2097152, 00:23:45.319 "uuid": "027906f4-01d2-44e2-9153-da30a0c0a56f", 00:23:45.319 "numa_id": 1, 00:23:45.319 "assigned_rate_limits": { 00:23:45.319 "rw_ios_per_sec": 0, 00:23:45.319 "rw_mbytes_per_sec": 0, 00:23:45.319 "r_mbytes_per_sec": 0, 00:23:45.319 "w_mbytes_per_sec": 0 00:23:45.319 }, 00:23:45.319 "claimed": false, 00:23:45.319 "zoned": false, 00:23:45.319 "supported_io_types": { 00:23:45.319 "read": true, 00:23:45.319 "write": true, 00:23:45.319 "unmap": false, 00:23:45.319 "flush": true, 00:23:45.319 "reset": true, 00:23:45.319 "nvme_admin": true, 00:23:45.319 "nvme_io": true, 00:23:45.319 "nvme_io_md": false, 00:23:45.319 "write_zeroes": true, 00:23:45.319 "zcopy": false, 00:23:45.319 "get_zone_info": false, 00:23:45.319 "zone_management": false, 00:23:45.319 "zone_append": false, 00:23:45.319 "compare": true, 00:23:45.319 "compare_and_write": true, 00:23:45.319 "abort": true, 00:23:45.319 "seek_hole": false, 00:23:45.319 "seek_data": false, 00:23:45.319 "copy": true, 00:23:45.319 
"nvme_iov_md": false 00:23:45.319 }, 00:23:45.319 "memory_domains": [ 00:23:45.319 { 00:23:45.319 "dma_device_id": "system", 00:23:45.319 "dma_device_type": 1 00:23:45.319 } 00:23:45.319 ], 00:23:45.319 "driver_specific": { 00:23:45.319 "nvme": [ 00:23:45.319 { 00:23:45.319 "trid": { 00:23:45.319 "trtype": "TCP", 00:23:45.319 "adrfam": "IPv4", 00:23:45.319 "traddr": "10.0.0.2", 00:23:45.319 "trsvcid": "4420", 00:23:45.319 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:45.319 }, 00:23:45.319 "ctrlr_data": { 00:23:45.319 "cntlid": 1, 00:23:45.319 "vendor_id": "0x8086", 00:23:45.319 "model_number": "SPDK bdev Controller", 00:23:45.319 "serial_number": "00000000000000000000", 00:23:45.319 "firmware_revision": "25.01", 00:23:45.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.319 "oacs": { 00:23:45.319 "security": 0, 00:23:45.319 "format": 0, 00:23:45.319 "firmware": 0, 00:23:45.319 "ns_manage": 0 00:23:45.319 }, 00:23:45.319 "multi_ctrlr": true, 00:23:45.319 "ana_reporting": false 00:23:45.319 }, 00:23:45.319 "vs": { 00:23:45.319 "nvme_version": "1.3" 00:23:45.319 }, 00:23:45.319 "ns_data": { 00:23:45.319 "id": 1, 00:23:45.319 "can_share": true 00:23:45.319 } 00:23:45.319 } 00:23:45.319 ], 00:23:45.319 "mp_policy": "active_passive" 00:23:45.319 } 00:23:45.319 } 00:23:45.319 ] 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.319 05:48:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.319 [2024-12-10 05:48:32.947580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.319 [2024-12-10 05:48:32.947634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xdbb250 (9): Bad file descriptor 00:23:45.319 [2024-12-10 05:48:33.079237] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.319 [ 00:23:45.319 { 00:23:45.319 "name": "nvme0n1", 00:23:45.319 "aliases": [ 00:23:45.319 "027906f4-01d2-44e2-9153-da30a0c0a56f" 00:23:45.319 ], 00:23:45.319 "product_name": "NVMe disk", 00:23:45.319 "block_size": 512, 00:23:45.319 "num_blocks": 2097152, 00:23:45.319 "uuid": "027906f4-01d2-44e2-9153-da30a0c0a56f", 00:23:45.319 "numa_id": 1, 00:23:45.319 "assigned_rate_limits": { 00:23:45.319 "rw_ios_per_sec": 0, 00:23:45.319 "rw_mbytes_per_sec": 0, 00:23:45.319 "r_mbytes_per_sec": 0, 00:23:45.319 "w_mbytes_per_sec": 0 00:23:45.319 }, 00:23:45.319 "claimed": false, 00:23:45.319 "zoned": false, 00:23:45.319 "supported_io_types": { 00:23:45.319 "read": true, 00:23:45.319 "write": true, 00:23:45.319 "unmap": false, 00:23:45.319 "flush": true, 00:23:45.319 "reset": true, 00:23:45.319 "nvme_admin": true, 00:23:45.319 "nvme_io": true, 00:23:45.319 "nvme_io_md": false, 00:23:45.319 "write_zeroes": true, 00:23:45.319 "zcopy": false, 00:23:45.319 "get_zone_info": false, 00:23:45.319 "zone_management": false, 00:23:45.319 "zone_append": false, 00:23:45.319 "compare": true, 00:23:45.319 "compare_and_write": true, 00:23:45.319 "abort": true, 00:23:45.319 "seek_hole": false, 00:23:45.319 "seek_data": false, 00:23:45.319 "copy": true, 00:23:45.319 "nvme_iov_md": false 00:23:45.319 }, 00:23:45.319 "memory_domains": [ 
00:23:45.319 { 00:23:45.319 "dma_device_id": "system", 00:23:45.319 "dma_device_type": 1 00:23:45.319 } 00:23:45.319 ], 00:23:45.319 "driver_specific": { 00:23:45.319 "nvme": [ 00:23:45.319 { 00:23:45.319 "trid": { 00:23:45.319 "trtype": "TCP", 00:23:45.319 "adrfam": "IPv4", 00:23:45.319 "traddr": "10.0.0.2", 00:23:45.319 "trsvcid": "4420", 00:23:45.319 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:45.319 }, 00:23:45.319 "ctrlr_data": { 00:23:45.319 "cntlid": 2, 00:23:45.319 "vendor_id": "0x8086", 00:23:45.319 "model_number": "SPDK bdev Controller", 00:23:45.319 "serial_number": "00000000000000000000", 00:23:45.319 "firmware_revision": "25.01", 00:23:45.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.319 "oacs": { 00:23:45.319 "security": 0, 00:23:45.319 "format": 0, 00:23:45.319 "firmware": 0, 00:23:45.319 "ns_manage": 0 00:23:45.319 }, 00:23:45.319 "multi_ctrlr": true, 00:23:45.319 "ana_reporting": false 00:23:45.319 }, 00:23:45.319 "vs": { 00:23:45.319 "nvme_version": "1.3" 00:23:45.319 }, 00:23:45.319 "ns_data": { 00:23:45.319 "id": 1, 00:23:45.319 "can_share": true 00:23:45.319 } 00:23:45.319 } 00:23:45.319 ], 00:23:45.319 "mp_policy": "active_passive" 00:23:45.319 } 00:23:45.319 } 00:23:45.319 ] 00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KlK0OMPyLz 
00:23:45.319 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KlK0OMPyLz 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.KlK0OMPyLz 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.320 [2024-12-10 05:48:33.156210] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:45.320 [2024-12-10 05:48:33.156304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.320 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.320 [2024-12-10 05:48:33.176276] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.579 nvme0n1 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.579 [ 00:23:45.579 { 00:23:45.579 "name": "nvme0n1", 00:23:45.579 "aliases": [ 00:23:45.579 "027906f4-01d2-44e2-9153-da30a0c0a56f" 00:23:45.579 ], 00:23:45.579 "product_name": "NVMe disk", 00:23:45.579 "block_size": 512, 00:23:45.579 "num_blocks": 2097152, 00:23:45.579 "uuid": "027906f4-01d2-44e2-9153-da30a0c0a56f", 00:23:45.579 "numa_id": 1, 00:23:45.579 "assigned_rate_limits": { 00:23:45.579 "rw_ios_per_sec": 0, 00:23:45.579 
"rw_mbytes_per_sec": 0, 00:23:45.579 "r_mbytes_per_sec": 0, 00:23:45.579 "w_mbytes_per_sec": 0 00:23:45.579 }, 00:23:45.579 "claimed": false, 00:23:45.579 "zoned": false, 00:23:45.579 "supported_io_types": { 00:23:45.579 "read": true, 00:23:45.579 "write": true, 00:23:45.579 "unmap": false, 00:23:45.579 "flush": true, 00:23:45.579 "reset": true, 00:23:45.579 "nvme_admin": true, 00:23:45.579 "nvme_io": true, 00:23:45.579 "nvme_io_md": false, 00:23:45.579 "write_zeroes": true, 00:23:45.579 "zcopy": false, 00:23:45.579 "get_zone_info": false, 00:23:45.579 "zone_management": false, 00:23:45.579 "zone_append": false, 00:23:45.579 "compare": true, 00:23:45.579 "compare_and_write": true, 00:23:45.579 "abort": true, 00:23:45.579 "seek_hole": false, 00:23:45.579 "seek_data": false, 00:23:45.579 "copy": true, 00:23:45.579 "nvme_iov_md": false 00:23:45.579 }, 00:23:45.579 "memory_domains": [ 00:23:45.579 { 00:23:45.579 "dma_device_id": "system", 00:23:45.579 "dma_device_type": 1 00:23:45.579 } 00:23:45.579 ], 00:23:45.579 "driver_specific": { 00:23:45.579 "nvme": [ 00:23:45.579 { 00:23:45.579 "trid": { 00:23:45.579 "trtype": "TCP", 00:23:45.579 "adrfam": "IPv4", 00:23:45.579 "traddr": "10.0.0.2", 00:23:45.579 "trsvcid": "4421", 00:23:45.579 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:45.579 }, 00:23:45.579 "ctrlr_data": { 00:23:45.579 "cntlid": 3, 00:23:45.579 "vendor_id": "0x8086", 00:23:45.579 "model_number": "SPDK bdev Controller", 00:23:45.579 "serial_number": "00000000000000000000", 00:23:45.579 "firmware_revision": "25.01", 00:23:45.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.579 "oacs": { 00:23:45.579 "security": 0, 00:23:45.579 "format": 0, 00:23:45.579 "firmware": 0, 00:23:45.579 "ns_manage": 0 00:23:45.579 }, 00:23:45.579 "multi_ctrlr": true, 00:23:45.579 "ana_reporting": false 00:23:45.579 }, 00:23:45.579 "vs": { 00:23:45.579 "nvme_version": "1.3" 00:23:45.579 }, 00:23:45.579 "ns_data": { 00:23:45.579 "id": 1, 00:23:45.579 "can_share": true 00:23:45.579 } 
00:23:45.579 } 00:23:45.579 ], 00:23:45.579 "mp_policy": "active_passive" 00:23:45.579 } 00:23:45.579 } 00:23:45.579 ] 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.KlK0OMPyLz 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:45.579 rmmod nvme_tcp 00:23:45.579 rmmod nvme_fabrics 00:23:45.579 rmmod nvme_keyring 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:45.579 05:48:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1269914 ']' 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1269914 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1269914 ']' 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1269914 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1269914 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.579 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.580 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1269914' 00:23:45.580 killing process with pid 1269914 00:23:45.580 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1269914 00:23:45.580 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1269914 00:23:45.839 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:45.839 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:45.839 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:45.839 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:45.839 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:45.839 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:45.839 
05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:45.839 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:45.839 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:45.839 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.839 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.839 05:48:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.743 05:48:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.743 00:23:47.743 real 0m9.410s 00:23:47.743 user 0m3.075s 00:23:47.743 sys 0m4.761s 00:23:47.743 05:48:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.743 05:48:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.743 ************************************ 00:23:47.743 END TEST nvmf_async_init 00:23:47.743 ************************************ 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.002 ************************************ 00:23:48.002 START TEST dma 00:23:48.002 ************************************ 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:48.002 * Looking for test storage... 00:23:48.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:48.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.002 --rc genhtml_branch_coverage=1 00:23:48.002 --rc genhtml_function_coverage=1 00:23:48.002 --rc genhtml_legend=1 00:23:48.002 --rc geninfo_all_blocks=1 00:23:48.002 --rc geninfo_unexecuted_blocks=1 00:23:48.002 00:23:48.002 ' 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:48.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.002 --rc genhtml_branch_coverage=1 00:23:48.002 --rc genhtml_function_coverage=1 
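The `cmp_versions`/`lt` trace above splits each version string on `.` and `-` and compares the pieces numerically, padding the shorter version with zeros. A minimal sketch of that logic (illustrative names, not the exact SPDK helpers):

```shell
#!/usr/bin/env bash
# version_lt A B: succeed if version A sorts strictly before version B.
# Components are split on '.' or '-' and compared as integers, with
# missing components treated as 0 (so "2" == "2.0").
version_lt() {
    local IFS=.- v a b
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"     # prints: 1.15 < 2
version_lt 2 1.15 || echo "2 >= 1.15"    # prints: 2 >= 1.15
```

The numeric comparison matters: a plain string sort would put `1.15` after `1.9`, which is exactly the case the component-wise loop in the traced script handles (here it decides whether the installed `lcov` is older than 2 before picking coverage options).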
00:23:48.002 --rc genhtml_legend=1 00:23:48.002 --rc geninfo_all_blocks=1 00:23:48.002 --rc geninfo_unexecuted_blocks=1 00:23:48.002 00:23:48.002 ' 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:48.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.002 --rc genhtml_branch_coverage=1 00:23:48.002 --rc genhtml_function_coverage=1 00:23:48.002 --rc genhtml_legend=1 00:23:48.002 --rc geninfo_all_blocks=1 00:23:48.002 --rc geninfo_unexecuted_blocks=1 00:23:48.002 00:23:48.002 ' 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:48.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.002 --rc genhtml_branch_coverage=1 00:23:48.002 --rc genhtml_function_coverage=1 00:23:48.002 --rc genhtml_legend=1 00:23:48.002 --rc geninfo_all_blocks=1 00:23:48.002 --rc geninfo_unexecuted_blocks=1 00:23:48.002 00:23:48.002 ' 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:48.002 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.003 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.261 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.261 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.261 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.261 05:48:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.261 05:48:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:48.262 
05:48:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:48.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:48.262 00:23:48.262 real 0m0.209s 00:23:48.262 user 0m0.125s 00:23:48.262 sys 0m0.097s 00:23:48.262 05:48:35 
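The `[: : integer expression expected` message in the trace comes from `test`/`[` being handed an empty string where `-eq` requires an integer: the script evaluates `'[' '' -eq 1 ']'` because some flag variable is unset. A minimal reproduction and the usual defaulting fix (the variable name is illustrative):

```shell
#!/usr/bin/env bash
# Reproduce the error class seen at nvmf/common.sh line 33: comparing an
# unset/empty variable with -eq makes test(1) complain and return 2.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
else
    echo "flag unset or non-numeric"   # this branch is taken
fi

# Defaulting the expansion avoids the error entirely:
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"                  # this branch is taken, no error
fi
```

Because `[` returns status 2 (not 1) on the malformed comparison, the `if` still falls through to the `else` branch, which is why the test run above keeps going despite the noise in the log.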
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:48.262 ************************************ 00:23:48.262 END TEST dma 00:23:48.262 ************************************ 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.262 ************************************ 00:23:48.262 START TEST nvmf_identify 00:23:48.262 ************************************ 00:23:48.262 05:48:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:48.262 * Looking for test storage... 
00:23:48.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.262 --rc genhtml_branch_coverage=1 00:23:48.262 --rc genhtml_function_coverage=1 00:23:48.262 --rc genhtml_legend=1 00:23:48.262 --rc geninfo_all_blocks=1 00:23:48.262 --rc geninfo_unexecuted_blocks=1 00:23:48.262 00:23:48.262 ' 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:23:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.262 --rc genhtml_branch_coverage=1 00:23:48.262 --rc genhtml_function_coverage=1 00:23:48.262 --rc genhtml_legend=1 00:23:48.262 --rc geninfo_all_blocks=1 00:23:48.262 --rc geninfo_unexecuted_blocks=1 00:23:48.262 00:23:48.262 ' 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.262 --rc genhtml_branch_coverage=1 00:23:48.262 --rc genhtml_function_coverage=1 00:23:48.262 --rc genhtml_legend=1 00:23:48.262 --rc geninfo_all_blocks=1 00:23:48.262 --rc geninfo_unexecuted_blocks=1 00:23:48.262 00:23:48.262 ' 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.262 --rc genhtml_branch_coverage=1 00:23:48.262 --rc genhtml_function_coverage=1 00:23:48.262 --rc genhtml_legend=1 00:23:48.262 --rc geninfo_all_blocks=1 00:23:48.262 --rc geninfo_unexecuted_blocks=1 00:23:48.262 00:23:48.262 ' 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.262 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.521 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:48.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:48.522 05:48:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.098 05:48:41 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:55.098 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.098 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.099 
05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:55.099 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:55.099 Found net devices under 0000:af:00.0: cvl_0_0 00:23:55.099 05:48:41 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:55.099 Found net devices under 0000:af:00.1: cvl_0_1 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.099 05:48:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:23:55.099 00:23:55.099 --- 10.0.0.2 ping statistics --- 00:23:55.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.099 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:23:55.099 00:23:55.099 --- 10.0.0.1 ping statistics --- 00:23:55.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.099 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1273609 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1273609 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1273609 ']' 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.099 [2024-12-10 05:48:42.118044] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:23:55.099 [2024-12-10 05:48:42.118086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.099 [2024-12-10 05:48:42.194127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.099 [2024-12-10 05:48:42.236968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.099 [2024-12-10 05:48:42.237005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.099 [2024-12-10 05:48:42.237012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.099 [2024-12-10 05:48:42.237018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.099 [2024-12-10 05:48:42.237023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:55.099 [2024-12-10 05:48:42.238324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.099 [2024-12-10 05:48:42.238431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.099 [2024-12-10 05:48:42.238539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.099 [2024-12-10 05:48:42.238540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.099 [2024-12-10 05:48:42.338624] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.099 Malloc0 00:23:55.099 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.100 05:48:42 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.100 [2024-12-10 05:48:42.440401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.100 05:48:42 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.100 [ 00:23:55.100 { 00:23:55.100 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:55.100 "subtype": "Discovery", 00:23:55.100 "listen_addresses": [ 00:23:55.100 { 00:23:55.100 "trtype": "TCP", 00:23:55.100 "adrfam": "IPv4", 00:23:55.100 "traddr": "10.0.0.2", 00:23:55.100 "trsvcid": "4420" 00:23:55.100 } 00:23:55.100 ], 00:23:55.100 "allow_any_host": true, 00:23:55.100 "hosts": [] 00:23:55.100 }, 00:23:55.100 { 00:23:55.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.100 "subtype": "NVMe", 00:23:55.100 "listen_addresses": [ 00:23:55.100 { 00:23:55.100 "trtype": "TCP", 00:23:55.100 "adrfam": "IPv4", 00:23:55.100 "traddr": "10.0.0.2", 00:23:55.100 "trsvcid": "4420" 00:23:55.100 } 00:23:55.100 ], 00:23:55.100 "allow_any_host": true, 00:23:55.100 "hosts": [], 00:23:55.100 "serial_number": "SPDK00000000000001", 00:23:55.100 "model_number": "SPDK bdev Controller", 00:23:55.100 "max_namespaces": 32, 00:23:55.100 "min_cntlid": 1, 00:23:55.100 "max_cntlid": 65519, 00:23:55.100 "namespaces": [ 00:23:55.100 { 00:23:55.100 "nsid": 1, 00:23:55.100 "bdev_name": "Malloc0", 00:23:55.100 "name": "Malloc0", 00:23:55.100 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:55.100 "eui64": "ABCDEF0123456789", 00:23:55.100 "uuid": "53469ed9-13ba-43a6-8f9f-30d851288235" 00:23:55.100 } 00:23:55.100 ] 00:23:55.100 } 00:23:55.100 ] 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.100 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:55.100 [2024-12-10 05:48:42.493079] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:23:55.100 [2024-12-10 05:48:42.493113] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273699 ] 00:23:55.100 [2024-12-10 05:48:42.532658] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:55.100 [2024-12-10 05:48:42.532701] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:55.100 [2024-12-10 05:48:42.532706] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:55.100 [2024-12-10 05:48:42.532716] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:55.100 [2024-12-10 05:48:42.532726] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:55.100 [2024-12-10 05:48:42.536401] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:55.100 [2024-12-10 05:48:42.536434] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d2a690 0 00:23:55.100 [2024-12-10 05:48:42.543188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:55.100 [2024-12-10 05:48:42.543202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:55.100 [2024-12-10 05:48:42.543206] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:55.100 [2024-12-10 05:48:42.543208] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:55.100 [2024-12-10 05:48:42.543241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.543246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.543250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2a690) 00:23:55.100 [2024-12-10 05:48:42.543261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:55.100 [2024-12-10 05:48:42.543278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c100, cid 0, qid 0 00:23:55.100 [2024-12-10 05:48:42.551175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.100 [2024-12-10 05:48:42.551183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.100 [2024-12-10 05:48:42.551189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c100) on tqpair=0x1d2a690 00:23:55.100 [2024-12-10 05:48:42.551206] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:55.100 [2024-12-10 05:48:42.551212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:55.100 [2024-12-10 05:48:42.551216] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:55.100 [2024-12-10 05:48:42.551228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2a690) 
00:23:55.100 [2024-12-10 05:48:42.551241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.100 [2024-12-10 05:48:42.551253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c100, cid 0, qid 0 00:23:55.100 [2024-12-10 05:48:42.551424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.100 [2024-12-10 05:48:42.551430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.100 [2024-12-10 05:48:42.551433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c100) on tqpair=0x1d2a690 00:23:55.100 [2024-12-10 05:48:42.551441] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:55.100 [2024-12-10 05:48:42.551447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:55.100 [2024-12-10 05:48:42.551453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2a690) 00:23:55.100 [2024-12-10 05:48:42.551465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.100 [2024-12-10 05:48:42.551475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c100, cid 0, qid 0 00:23:55.100 [2024-12-10 05:48:42.551534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.100 [2024-12-10 05:48:42.551539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:55.100 [2024-12-10 05:48:42.551542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c100) on tqpair=0x1d2a690 00:23:55.100 [2024-12-10 05:48:42.551550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:55.100 [2024-12-10 05:48:42.551556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:55.100 [2024-12-10 05:48:42.551562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2a690) 00:23:55.100 [2024-12-10 05:48:42.551574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.100 [2024-12-10 05:48:42.551583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c100, cid 0, qid 0 00:23:55.100 [2024-12-10 05:48:42.551646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.100 [2024-12-10 05:48:42.551651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.100 [2024-12-10 05:48:42.551656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c100) on tqpair=0x1d2a690 00:23:55.100 [2024-12-10 05:48:42.551664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:55.100 [2024-12-10 05:48:42.551672] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2a690) 00:23:55.100 [2024-12-10 05:48:42.551684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.100 [2024-12-10 05:48:42.551693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c100, cid 0, qid 0 00:23:55.100 [2024-12-10 05:48:42.551756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.100 [2024-12-10 05:48:42.551762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.100 [2024-12-10 05:48:42.551765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.100 [2024-12-10 05:48:42.551768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c100) on tqpair=0x1d2a690 00:23:55.101 [2024-12-10 05:48:42.551771] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:55.101 [2024-12-10 05:48:42.551776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:55.101 [2024-12-10 05:48:42.551783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:55.101 [2024-12-10 05:48:42.551890] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:55.101 [2024-12-10 05:48:42.551895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:55.101 [2024-12-10 05:48:42.551902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.551905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.551908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2a690) 00:23:55.101 [2024-12-10 05:48:42.551913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.101 [2024-12-10 05:48:42.551923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c100, cid 0, qid 0 00:23:55.101 [2024-12-10 05:48:42.551987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.101 [2024-12-10 05:48:42.551993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.101 [2024-12-10 05:48:42.551995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.551999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c100) on tqpair=0x1d2a690 00:23:55.101 [2024-12-10 05:48:42.552003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:55.101 [2024-12-10 05:48:42.552010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.552014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.552017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2a690) 00:23:55.101 [2024-12-10 05:48:42.552022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.101 [2024-12-10 05:48:42.552031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c100, cid 0, qid 0 00:23:55.101 [2024-12-10 
05:48:42.552099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.101 [2024-12-10 05:48:42.552105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.101 [2024-12-10 05:48:42.552108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.552111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c100) on tqpair=0x1d2a690 00:23:55.101 [2024-12-10 05:48:42.552115] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:55.101 [2024-12-10 05:48:42.552119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:55.101 [2024-12-10 05:48:42.552125] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:55.101 [2024-12-10 05:48:42.552132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:55.101 [2024-12-10 05:48:42.552141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.552145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2a690) 00:23:55.101 [2024-12-10 05:48:42.552150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.101 [2024-12-10 05:48:42.552159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c100, cid 0, qid 0 00:23:55.101 [2024-12-10 05:48:42.552254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.101 [2024-12-10 05:48:42.552260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:23:55.101 [2024-12-10 05:48:42.552263] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.552266] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2a690): datao=0, datal=4096, cccid=0 00:23:55.101 [2024-12-10 05:48:42.552270] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8c100) on tqpair(0x1d2a690): expected_datao=0, payload_size=4096 00:23:55.101 [2024-12-10 05:48:42.552274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.552288] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.552292] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.101 [2024-12-10 05:48:42.597183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.101 [2024-12-10 05:48:42.597186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c100) on tqpair=0x1d2a690 00:23:55.101 [2024-12-10 05:48:42.597197] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:55.101 [2024-12-10 05:48:42.597201] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:55.101 [2024-12-10 05:48:42.597205] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:55.101 [2024-12-10 05:48:42.597210] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:55.101 [2024-12-10 05:48:42.597214] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:55.101 [2024-12-10 05:48:42.597218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:55.101 [2024-12-10 05:48:42.597226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:55.101 [2024-12-10 05:48:42.597232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2a690) 00:23:55.101 [2024-12-10 05:48:42.597250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.101 [2024-12-10 05:48:42.597262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c100, cid 0, qid 0 00:23:55.101 [2024-12-10 05:48:42.597409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.101 [2024-12-10 05:48:42.597415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.101 [2024-12-10 05:48:42.597418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c100) on tqpair=0x1d2a690 00:23:55.101 [2024-12-10 05:48:42.597428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2a690) 00:23:55.101 [2024-12-10 05:48:42.597439] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.101 [2024-12-10 05:48:42.597445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d2a690) 00:23:55.101 [2024-12-10 05:48:42.597456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.101 [2024-12-10 05:48:42.597460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d2a690) 00:23:55.101 [2024-12-10 05:48:42.597471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.101 [2024-12-10 05:48:42.597476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690) 00:23:55.101 [2024-12-10 05:48:42.597487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.101 [2024-12-10 05:48:42.597491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:55.101 [2024-12-10 05:48:42.597503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:55.101 [2024-12-10 05:48:42.597508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2a690) 00:23:55.101 [2024-12-10 05:48:42.597517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.101 [2024-12-10 05:48:42.597527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c100, cid 0, qid 0 00:23:55.101 [2024-12-10 05:48:42.597532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c280, cid 1, qid 0 00:23:55.101 [2024-12-10 05:48:42.597536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c400, cid 2, qid 0 00:23:55.101 [2024-12-10 05:48:42.597540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0 00:23:55.101 [2024-12-10 05:48:42.597545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c700, cid 4, qid 0 00:23:55.101 [2024-12-10 05:48:42.597637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.101 [2024-12-10 05:48:42.597642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.101 [2024-12-10 05:48:42.597645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c700) on tqpair=0x1d2a690 00:23:55.101 [2024-12-10 05:48:42.597653] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:55.101 [2024-12-10 05:48:42.597657] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:23:55.101 [2024-12-10 05:48:42.597666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.101 [2024-12-10 05:48:42.597670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2a690) 00:23:55.101 [2024-12-10 05:48:42.597675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.101 [2024-12-10 05:48:42.597684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c700, cid 4, qid 0 00:23:55.101 [2024-12-10 05:48:42.597762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.101 [2024-12-10 05:48:42.597768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.101 [2024-12-10 05:48:42.597771] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.597774] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2a690): datao=0, datal=4096, cccid=4 00:23:55.102 [2024-12-10 05:48:42.597778] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8c700) on tqpair(0x1d2a690): expected_datao=0, payload_size=4096 00:23:55.102 [2024-12-10 05:48:42.597782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.597788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.597791] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.597804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.102 [2024-12-10 05:48:42.597809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.102 [2024-12-10 05:48:42.597812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.597815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1d8c700) on tqpair=0x1d2a690 00:23:55.102 [2024-12-10 05:48:42.597825] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:55.102 [2024-12-10 05:48:42.597844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.597849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2a690) 00:23:55.102 [2024-12-10 05:48:42.597854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.102 [2024-12-10 05:48:42.597860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.597863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.597866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d2a690) 00:23:55.102 [2024-12-10 05:48:42.597871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.102 [2024-12-10 05:48:42.597883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c700, cid 4, qid 0 00:23:55.102 [2024-12-10 05:48:42.597888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c880, cid 5, qid 0 00:23:55.102 [2024-12-10 05:48:42.597985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.102 [2024-12-10 05:48:42.597990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.102 [2024-12-10 05:48:42.597995] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.597998] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2a690): datao=0, datal=1024, cccid=4 00:23:55.102 [2024-12-10 05:48:42.598002] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8c700) on tqpair(0x1d2a690): expected_datao=0, payload_size=1024 00:23:55.102 [2024-12-10 05:48:42.598006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.598011] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.598014] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.598019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.102 [2024-12-10 05:48:42.598024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.102 [2024-12-10 05:48:42.598027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.598030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c880) on tqpair=0x1d2a690 00:23:55.102 [2024-12-10 05:48:42.639310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.102 [2024-12-10 05:48:42.639320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.102 [2024-12-10 05:48:42.639324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.639327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c700) on tqpair=0x1d2a690 00:23:55.102 [2024-12-10 05:48:42.639337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.639341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2a690) 00:23:55.102 [2024-12-10 05:48:42.639347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.102 [2024-12-10 05:48:42.639361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c700, cid 4, qid 0 00:23:55.102 [2024-12-10 05:48:42.639439] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.102 [2024-12-10 05:48:42.639444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.102 [2024-12-10 05:48:42.639448] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.639451] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2a690): datao=0, datal=3072, cccid=4 00:23:55.102 [2024-12-10 05:48:42.639454] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8c700) on tqpair(0x1d2a690): expected_datao=0, payload_size=3072 00:23:55.102 [2024-12-10 05:48:42.639458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.639464] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.639467] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.639494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.102 [2024-12-10 05:48:42.639499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.102 [2024-12-10 05:48:42.639502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.639505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c700) on tqpair=0x1d2a690 00:23:55.102 [2024-12-10 05:48:42.639512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.639515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2a690) 00:23:55.102 [2024-12-10 05:48:42.639521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.102 [2024-12-10 05:48:42.639534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c700, cid 4, qid 0 00:23:55.102 [2024-12-10 
05:48:42.639601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.102 [2024-12-10 05:48:42.639606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.102 [2024-12-10 05:48:42.639609] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.639615] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2a690): datao=0, datal=8, cccid=4 00:23:55.102 [2024-12-10 05:48:42.639619] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8c700) on tqpair(0x1d2a690): expected_datao=0, payload_size=8 00:23:55.102 [2024-12-10 05:48:42.639622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.639627] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.639631] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.685177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.102 [2024-12-10 05:48:42.685186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.102 [2024-12-10 05:48:42.685189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.102 [2024-12-10 05:48:42.685192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c700) on tqpair=0x1d2a690 00:23:55.102 ===================================================== 00:23:55.102 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:55.102 ===================================================== 00:23:55.102 Controller Capabilities/Features 00:23:55.102 ================================ 00:23:55.102 Vendor ID: 0000 00:23:55.102 Subsystem Vendor ID: 0000 00:23:55.102 Serial Number: .................... 00:23:55.102 Model Number: ........................................ 
00:23:55.102 Firmware Version: 25.01 00:23:55.102 Recommended Arb Burst: 0 00:23:55.102 IEEE OUI Identifier: 00 00 00 00:23:55.102 Multi-path I/O 00:23:55.102 May have multiple subsystem ports: No 00:23:55.102 May have multiple controllers: No 00:23:55.102 Associated with SR-IOV VF: No 00:23:55.102 Max Data Transfer Size: 131072 00:23:55.102 Max Number of Namespaces: 0 00:23:55.102 Max Number of I/O Queues: 1024 00:23:55.102 NVMe Specification Version (VS): 1.3 00:23:55.102 NVMe Specification Version (Identify): 1.3 00:23:55.102 Maximum Queue Entries: 128 00:23:55.102 Contiguous Queues Required: Yes 00:23:55.102 Arbitration Mechanisms Supported 00:23:55.102 Weighted Round Robin: Not Supported 00:23:55.102 Vendor Specific: Not Supported 00:23:55.102 Reset Timeout: 15000 ms 00:23:55.102 Doorbell Stride: 4 bytes 00:23:55.102 NVM Subsystem Reset: Not Supported 00:23:55.102 Command Sets Supported 00:23:55.102 NVM Command Set: Supported 00:23:55.102 Boot Partition: Not Supported 00:23:55.102 Memory Page Size Minimum: 4096 bytes 00:23:55.102 Memory Page Size Maximum: 4096 bytes 00:23:55.102 Persistent Memory Region: Not Supported 00:23:55.102 Optional Asynchronous Events Supported 00:23:55.102 Namespace Attribute Notices: Not Supported 00:23:55.102 Firmware Activation Notices: Not Supported 00:23:55.102 ANA Change Notices: Not Supported 00:23:55.102 PLE Aggregate Log Change Notices: Not Supported 00:23:55.102 LBA Status Info Alert Notices: Not Supported 00:23:55.102 EGE Aggregate Log Change Notices: Not Supported 00:23:55.102 Normal NVM Subsystem Shutdown event: Not Supported 00:23:55.102 Zone Descriptor Change Notices: Not Supported 00:23:55.102 Discovery Log Change Notices: Supported 00:23:55.102 Controller Attributes 00:23:55.102 128-bit Host Identifier: Not Supported 00:23:55.102 Non-Operational Permissive Mode: Not Supported 00:23:55.102 NVM Sets: Not Supported 00:23:55.102 Read Recovery Levels: Not Supported 00:23:55.102 Endurance Groups: Not Supported 00:23:55.102 
Predictable Latency Mode: Not Supported 00:23:55.102 Traffic Based Keep ALive: Not Supported 00:23:55.102 Namespace Granularity: Not Supported 00:23:55.102 SQ Associations: Not Supported 00:23:55.102 UUID List: Not Supported 00:23:55.102 Multi-Domain Subsystem: Not Supported 00:23:55.102 Fixed Capacity Management: Not Supported 00:23:55.102 Variable Capacity Management: Not Supported 00:23:55.102 Delete Endurance Group: Not Supported 00:23:55.102 Delete NVM Set: Not Supported 00:23:55.102 Extended LBA Formats Supported: Not Supported 00:23:55.102 Flexible Data Placement Supported: Not Supported 00:23:55.102 00:23:55.102 Controller Memory Buffer Support 00:23:55.102 ================================ 00:23:55.102 Supported: No 00:23:55.102 00:23:55.102 Persistent Memory Region Support 00:23:55.102 ================================ 00:23:55.102 Supported: No 00:23:55.102 00:23:55.102 Admin Command Set Attributes 00:23:55.102 ============================ 00:23:55.102 Security Send/Receive: Not Supported 00:23:55.102 Format NVM: Not Supported 00:23:55.102 Firmware Activate/Download: Not Supported 00:23:55.103 Namespace Management: Not Supported 00:23:55.103 Device Self-Test: Not Supported 00:23:55.103 Directives: Not Supported 00:23:55.103 NVMe-MI: Not Supported 00:23:55.103 Virtualization Management: Not Supported 00:23:55.103 Doorbell Buffer Config: Not Supported 00:23:55.103 Get LBA Status Capability: Not Supported 00:23:55.103 Command & Feature Lockdown Capability: Not Supported 00:23:55.103 Abort Command Limit: 1 00:23:55.103 Async Event Request Limit: 4 00:23:55.103 Number of Firmware Slots: N/A 00:23:55.103 Firmware Slot 1 Read-Only: N/A 00:23:55.103 Firmware Activation Without Reset: N/A 00:23:55.103 Multiple Update Detection Support: N/A 00:23:55.103 Firmware Update Granularity: No Information Provided 00:23:55.103 Per-Namespace SMART Log: No 00:23:55.103 Asymmetric Namespace Access Log Page: Not Supported 00:23:55.103 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:55.103 Command Effects Log Page: Not Supported 00:23:55.103 Get Log Page Extended Data: Supported 00:23:55.103 Telemetry Log Pages: Not Supported 00:23:55.103 Persistent Event Log Pages: Not Supported 00:23:55.103 Supported Log Pages Log Page: May Support 00:23:55.103 Commands Supported & Effects Log Page: Not Supported 00:23:55.103 Feature Identifiers & Effects Log Page:May Support 00:23:55.103 NVMe-MI Commands & Effects Log Page: May Support 00:23:55.103 Data Area 4 for Telemetry Log: Not Supported 00:23:55.103 Error Log Page Entries Supported: 128 00:23:55.103 Keep Alive: Not Supported 00:23:55.103 00:23:55.103 NVM Command Set Attributes 00:23:55.103 ========================== 00:23:55.103 Submission Queue Entry Size 00:23:55.103 Max: 1 00:23:55.103 Min: 1 00:23:55.103 Completion Queue Entry Size 00:23:55.103 Max: 1 00:23:55.103 Min: 1 00:23:55.103 Number of Namespaces: 0 00:23:55.103 Compare Command: Not Supported 00:23:55.103 Write Uncorrectable Command: Not Supported 00:23:55.103 Dataset Management Command: Not Supported 00:23:55.103 Write Zeroes Command: Not Supported 00:23:55.103 Set Features Save Field: Not Supported 00:23:55.103 Reservations: Not Supported 00:23:55.103 Timestamp: Not Supported 00:23:55.103 Copy: Not Supported 00:23:55.103 Volatile Write Cache: Not Present 00:23:55.103 Atomic Write Unit (Normal): 1 00:23:55.103 Atomic Write Unit (PFail): 1 00:23:55.103 Atomic Compare & Write Unit: 1 00:23:55.103 Fused Compare & Write: Supported 00:23:55.103 Scatter-Gather List 00:23:55.103 SGL Command Set: Supported 00:23:55.103 SGL Keyed: Supported 00:23:55.103 SGL Bit Bucket Descriptor: Not Supported 00:23:55.103 SGL Metadata Pointer: Not Supported 00:23:55.103 Oversized SGL: Not Supported 00:23:55.103 SGL Metadata Address: Not Supported 00:23:55.103 SGL Offset: Supported 00:23:55.103 Transport SGL Data Block: Not Supported 00:23:55.103 Replay Protected Memory Block: Not Supported 00:23:55.103 00:23:55.103 
Firmware Slot Information 00:23:55.103 ========================= 00:23:55.103 Active slot: 0 00:23:55.103 00:23:55.103 00:23:55.103 Error Log 00:23:55.103 ========= 00:23:55.103 00:23:55.103 Active Namespaces 00:23:55.103 ================= 00:23:55.103 Discovery Log Page 00:23:55.103 ================== 00:23:55.103 Generation Counter: 2 00:23:55.103 Number of Records: 2 00:23:55.103 Record Format: 0 00:23:55.103 00:23:55.103 Discovery Log Entry 0 00:23:55.103 ---------------------- 00:23:55.103 Transport Type: 3 (TCP) 00:23:55.103 Address Family: 1 (IPv4) 00:23:55.103 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:55.103 Entry Flags: 00:23:55.103 Duplicate Returned Information: 1 00:23:55.103 Explicit Persistent Connection Support for Discovery: 1 00:23:55.103 Transport Requirements: 00:23:55.103 Secure Channel: Not Required 00:23:55.103 Port ID: 0 (0x0000) 00:23:55.103 Controller ID: 65535 (0xffff) 00:23:55.103 Admin Max SQ Size: 128 00:23:55.103 Transport Service Identifier: 4420 00:23:55.103 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:55.103 Transport Address: 10.0.0.2 00:23:55.103 Discovery Log Entry 1 00:23:55.103 ---------------------- 00:23:55.103 Transport Type: 3 (TCP) 00:23:55.103 Address Family: 1 (IPv4) 00:23:55.103 Subsystem Type: 2 (NVM Subsystem) 00:23:55.103 Entry Flags: 00:23:55.103 Duplicate Returned Information: 0 00:23:55.103 Explicit Persistent Connection Support for Discovery: 0 00:23:55.103 Transport Requirements: 00:23:55.103 Secure Channel: Not Required 00:23:55.103 Port ID: 0 (0x0000) 00:23:55.103 Controller ID: 65535 (0xffff) 00:23:55.103 Admin Max SQ Size: 128 00:23:55.103 Transport Service Identifier: 4420 00:23:55.103 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:55.103 Transport Address: 10.0.0.2 [2024-12-10 05:48:42.685270] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:55.103 [2024-12-10 
05:48:42.685280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c100) on tqpair=0x1d2a690
00:23:55.103 [2024-12-10 05:48:42.685285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.103 [2024-12-10 05:48:42.685290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c280) on tqpair=0x1d2a690
00:23:55.103 [2024-12-10 05:48:42.685294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.103 [2024-12-10 05:48:42.685298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c400) on tqpair=0x1d2a690
00:23:55.103 [2024-12-10 05:48:42.685302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.103 [2024-12-10 05:48:42.685306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.103 [2024-12-10 05:48:42.685310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.103 [2024-12-10 05:48:42.685317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.103 [2024-12-10 05:48:42.685321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.103 [2024-12-10 05:48:42.685324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.103 [2024-12-10 05:48:42.685330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.103 [2024-12-10 05:48:42.685343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.103 [2024-12-10 05:48:42.685404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.103 [2024-12-10 05:48:42.685410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.103 [2024-12-10 05:48:42.685413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.103 [2024-12-10 05:48:42.685416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.103 [2024-12-10 05:48:42.685422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.103 [2024-12-10 05:48:42.685425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.103 [2024-12-10 05:48:42.685428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.103 [2024-12-10 05:48:42.685434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.103 [2024-12-10 05:48:42.685446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.103 [2024-12-10 05:48:42.685520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.103 [2024-12-10 05:48:42.685526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.103 [2024-12-10 05:48:42.685530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.103 [2024-12-10 05:48:42.685534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.103 [2024-12-10 05:48:42.685538] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:23:55.103 [2024-12-10 05:48:42.685542] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:23:55.104 [2024-12-10 05:48:42.685549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.685561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.685571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.685633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.685639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.685642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.685653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.685665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.685674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.685733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.685739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.685741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.685752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.685764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.685773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.685843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.685848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.685851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.685862] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.685874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.685883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.685951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.685956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.685959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.685970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.685976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.685982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.685991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.686051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.686056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.686059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.686070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.686081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.686090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.686147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.686152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.686155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.686171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.686183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.686192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.686262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.686268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.686271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.686282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.686294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.686302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.686368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.686373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.686378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.686389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.686401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.686410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.686469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.686475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.686478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.686490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.686501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.686511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.686570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.686576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.686579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.686590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.686601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.686611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.686680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.686685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.686688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.686699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.104 [2024-12-10 05:48:42.686711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.104 [2024-12-10 05:48:42.686720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.104 [2024-12-10 05:48:42.686791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.104 [2024-12-10 05:48:42.686796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.104 [2024-12-10 05:48:42.686799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.104 [2024-12-10 05:48:42.686812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.104 [2024-12-10 05:48:42.686816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.686819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.105 [2024-12-10 05:48:42.686824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.105 [2024-12-10 05:48:42.686833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.105 [2024-12-10 05:48:42.686895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.105 [2024-12-10 05:48:42.686900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.105 [2024-12-10 05:48:42.686903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.686906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.105 [2024-12-10 05:48:42.686914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.686918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.686921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.105 [2024-12-10 05:48:42.686926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.105 [2024-12-10 05:48:42.686935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.105 [2024-12-10 05:48:42.686993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.105 [2024-12-10 05:48:42.686998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.105 [2024-12-10 05:48:42.687001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.687004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.105 [2024-12-10 05:48:42.687012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.687015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.687018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.105 [2024-12-10 05:48:42.687024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.105 [2024-12-10 05:48:42.687033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.105 [2024-12-10 05:48:42.687120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.105 [2024-12-10 05:48:42.687125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.105 [2024-12-10 05:48:42.687128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.687131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.105 [2024-12-10 05:48:42.687140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.687143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.687146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.105 [2024-12-10 05:48:42.687151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.105 [2024-12-10 05:48:42.687160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.105 [2024-12-10 05:48:42.691171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.105 [2024-12-10 05:48:42.691179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.105 [2024-12-10 05:48:42.691182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.691185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.105 [2024-12-10 05:48:42.691198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.691202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.691205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2a690)
00:23:55.105 [2024-12-10 05:48:42.691210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.105 [2024-12-10 05:48:42.691221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c580, cid 3, qid 0
00:23:55.105 [2024-12-10 05:48:42.691372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.105 [2024-12-10 05:48:42.691377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.105 [2024-12-10 05:48:42.691380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.691384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8c580) on tqpair=0x1d2a690
00:23:55.105 [2024-12-10 05:48:42.691390] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds
00:23:55.105
00:23:55.105 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:23:55.105 [2024-12-10 05:48:42.728406] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization...
00:23:55.105 [2024-12-10 05:48:42.728445] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273701 ]
00:23:55.105 [2024-12-10 05:48:42.769335] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:23:55.105 [2024-12-10 05:48:42.769375] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:23:55.105 [2024-12-10 05:48:42.769380] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:23:55.105 [2024-12-10 05:48:42.769391] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:23:55.105 [2024-12-10 05:48:42.769400] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:23:55.105 [2024-12-10 05:48:42.769730] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:23:55.105 [2024-12-10 05:48:42.769756] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1196690 0
00:23:55.105 [2024-12-10 05:48:42.780176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:23:55.105 [2024-12-10 05:48:42.780190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:23:55.105 [2024-12-10 05:48:42.780194] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:23:55.105 [2024-12-10 05:48:42.780197] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:23:55.105 [2024-12-10 05:48:42.780224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.780229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.780232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1196690)
00:23:55.105 [2024-12-10 05:48:42.780242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:23:55.105 [2024-12-10 05:48:42.780258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8100, cid 0, qid 0
00:23:55.105 [2024-12-10 05:48:42.791174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.105 [2024-12-10 05:48:42.791186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.105 [2024-12-10 05:48:42.791189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.791193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8100) on tqpair=0x1196690
00:23:55.105 [2024-12-10 05:48:42.791201] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:23:55.105 [2024-12-10 05:48:42.791206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:23:55.105 [2024-12-10 05:48:42.791211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:23:55.105 [2024-12-10 05:48:42.791221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.791225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.791228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1196690)
00:23:55.105 [2024-12-10 05:48:42.791234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.105 [2024-12-10 05:48:42.791247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8100, cid 0, qid 0
00:23:55.105 [2024-12-10 05:48:42.791334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.105 [2024-12-10 05:48:42.791340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.105 [2024-12-10 05:48:42.791343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.791346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8100) on tqpair=0x1196690
00:23:55.105 [2024-12-10 05:48:42.791350] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:23:55.105 [2024-12-10 05:48:42.791357] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:23:55.105 [2024-12-10 05:48:42.791363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.791366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.791369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1196690)
00:23:55.105 [2024-12-10 05:48:42.791375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.105 [2024-12-10 05:48:42.791385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8100, cid 0, qid 0
00:23:55.105 [2024-12-10 05:48:42.791446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.105 [2024-12-10 05:48:42.791452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.105 [2024-12-10 05:48:42.791455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.791458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8100) on tqpair=0x1196690
00:23:55.105 [2024-12-10 05:48:42.791462] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:23:55.105 [2024-12-10 05:48:42.791469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:23:55.105 [2024-12-10 05:48:42.791475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.791478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.791481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1196690)
00:23:55.105 [2024-12-10 05:48:42.791487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.105 [2024-12-10 05:48:42.791496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8100, cid 0, qid 0
00:23:55.105 [2024-12-10 05:48:42.791560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.105 [2024-12-10 05:48:42.791566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.105 [2024-12-10 05:48:42.791571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.105 [2024-12-10 05:48:42.791575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8100) on tqpair=0x1196690
00:23:55.105 [2024-12-10 05:48:42.791579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:23:55.105 [2024-12-10 05:48:42.791587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.791590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.791593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1196690)
00:23:55.106 [2024-12-10 05:48:42.791599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.106 [2024-12-10 05:48:42.791608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8100, cid 0, qid 0
00:23:55.106 [2024-12-10 05:48:42.791669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.106 [2024-12-10 05:48:42.791674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.106 [2024-12-10 05:48:42.791677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.791680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8100) on tqpair=0x1196690
00:23:55.106 [2024-12-10 05:48:42.791684] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:23:55.106 [2024-12-10 05:48:42.791688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:23:55.106 [2024-12-10 05:48:42.791694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:23:55.106 [2024-12-10 05:48:42.791802] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:23:55.106 [2024-12-10 05:48:42.791806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:23:55.106 [2024-12-10 05:48:42.791812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.791816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.791819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1196690)
00:23:55.106 [2024-12-10 05:48:42.791824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.106 [2024-12-10 05:48:42.791834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8100, cid 0, qid 0
00:23:55.106 [2024-12-10 05:48:42.791908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.106 [2024-12-10 05:48:42.791913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.106 [2024-12-10 05:48:42.791916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.791919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8100) on tqpair=0x1196690
00:23:55.106 [2024-12-10 05:48:42.791923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:23:55.106 [2024-12-10 05:48:42.791931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.791935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.791938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1196690)
00:23:55.106 [2024-12-10 05:48:42.791943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.106 [2024-12-10 05:48:42.791953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8100, cid 0, qid 0
00:23:55.106 [2024-12-10 05:48:42.792023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.106 [2024-12-10 05:48:42.792030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.106 [2024-12-10 05:48:42.792033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.792036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8100) on tqpair=0x1196690
00:23:55.106 [2024-12-10 05:48:42.792040] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:23:55.106 [2024-12-10 05:48:42.792044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:23:55.106 [2024-12-10 05:48:42.792052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:23:55.106 [2024-12-10 05:48:42.792058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:23:55.106 [2024-12-10 05:48:42.792069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.792072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1196690)
00:23:55.106 [2024-12-10 05:48:42.792078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.106 [2024-12-10 05:48:42.792089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8100, cid 0, qid 0
00:23:55.106 [2024-12-10 05:48:42.792195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.106 [2024-12-10 05:48:42.792201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.106 [2024-12-10 05:48:42.792204] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.792207] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1196690): datao=0, datal=4096, cccid=0
00:23:55.106 [2024-12-10 05:48:42.792211] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f8100) on tqpair(0x1196690): expected_datao=0, payload_size=4096
00:23:55.106 [2024-12-10 05:48:42.792215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.792225] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.792228] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.836172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.106 [2024-12-10 05:48:42.836182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.106 [2024-12-10 05:48:42.836185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.836189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8100) on tqpair=0x1196690
00:23:55.106 [2024-12-10 05:48:42.836195] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:23:55.106 [2024-12-10 05:48:42.836200] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:23:55.106 [2024-12-10 05:48:42.836204] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:23:55.106 [2024-12-10 05:48:42.836207] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:23:55.106 [2024-12-10 05:48:42.836211] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:23:55.106 [2024-12-10 05:48:42.836215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:23:55.106 [2024-12-10 05:48:42.836224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:23:55.106 [2024-12-10 05:48:42.836230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.836234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.836237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1196690)
00:23:55.106 [2024-12-10 05:48:42.836246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:23:55.106 [2024-12-10 05:48:42.836258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8100, cid 0, qid 0
00:23:55.106 [2024-12-10 05:48:42.836323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.106 [2024-12-10 05:48:42.836328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.106 [2024-12-10 05:48:42.836331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.836335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8100) on tqpair=0x1196690
00:23:55.106 [2024-12-10 05:48:42.836340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.836343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.836346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1196690)
00:23:55.106 [2024-12-10 05:48:42.836351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.106 [2024-12-10 05:48:42.836356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.836359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.106 [2024-12-10 05:48:42.836363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1196690)
00:23:55.106 [2024-12-10 05:48:42.836367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0
cdw10:00000000 cdw11:00000000 00:23:55.106 [2024-12-10 05:48:42.836372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.106 [2024-12-10 05:48:42.836375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.106 [2024-12-10 05:48:42.836378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1196690) 00:23:55.106 [2024-12-10 05:48:42.836383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.106 [2024-12-10 05:48:42.836388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.106 [2024-12-10 05:48:42.836391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.106 [2024-12-10 05:48:42.836394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.106 [2024-12-10 05:48:42.836399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.106 [2024-12-10 05:48:42.836403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:55.106 [2024-12-10 05:48:42.836414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:55.106 [2024-12-10 05:48:42.836419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.106 [2024-12-10 05:48:42.836422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1196690) 00:23:55.106 [2024-12-10 05:48:42.836428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.106 [2024-12-10 05:48:42.836439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x11f8100, cid 0, qid 0 00:23:55.106 [2024-12-10 05:48:42.836444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8280, cid 1, qid 0 00:23:55.106 [2024-12-10 05:48:42.836448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8400, cid 2, qid 0 00:23:55.106 [2024-12-10 05:48:42.836451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.106 [2024-12-10 05:48:42.836455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8700, cid 4, qid 0 00:23:55.106 [2024-12-10 05:48:42.836553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.106 [2024-12-10 05:48:42.836561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.106 [2024-12-10 05:48:42.836564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.106 [2024-12-10 05:48:42.836567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8700) on tqpair=0x1196690 00:23:55.106 [2024-12-10 05:48:42.836571] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:55.106 [2024-12-10 05:48:42.836575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.836584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.836589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.836594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.836597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.107 [2024-12-10 
05:48:42.836600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1196690) 00:23:55.107 [2024-12-10 05:48:42.836605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.107 [2024-12-10 05:48:42.836615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8700, cid 4, qid 0 00:23:55.107 [2024-12-10 05:48:42.836680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.107 [2024-12-10 05:48:42.836686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.107 [2024-12-10 05:48:42.836689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.836692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8700) on tqpair=0x1196690 00:23:55.107 [2024-12-10 05:48:42.836741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.836751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.836757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.836760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1196690) 00:23:55.107 [2024-12-10 05:48:42.836766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.107 [2024-12-10 05:48:42.836775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8700, cid 4, qid 0 00:23:55.107 [2024-12-10 05:48:42.836851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.107 [2024-12-10 05:48:42.836856] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.107 [2024-12-10 05:48:42.836859] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.836863] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1196690): datao=0, datal=4096, cccid=4 00:23:55.107 [2024-12-10 05:48:42.836866] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f8700) on tqpair(0x1196690): expected_datao=0, payload_size=4096 00:23:55.107 [2024-12-10 05:48:42.836870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.836882] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.836886] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.107 [2024-12-10 05:48:42.879187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.107 [2024-12-10 05:48:42.879191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8700) on tqpair=0x1196690 00:23:55.107 [2024-12-10 05:48:42.879210] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:55.107 [2024-12-10 05:48:42.879221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.879230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.879237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1196690) 00:23:55.107 [2024-12-10 05:48:42.879247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.107 [2024-12-10 05:48:42.879260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8700, cid 4, qid 0 00:23:55.107 [2024-12-10 05:48:42.879361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.107 [2024-12-10 05:48:42.879368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.107 [2024-12-10 05:48:42.879371] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879374] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1196690): datao=0, datal=4096, cccid=4 00:23:55.107 [2024-12-10 05:48:42.879378] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f8700) on tqpair(0x1196690): expected_datao=0, payload_size=4096 00:23:55.107 [2024-12-10 05:48:42.879382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879388] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879391] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.107 [2024-12-10 05:48:42.879439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.107 [2024-12-10 05:48:42.879441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8700) on tqpair=0x1196690 00:23:55.107 [2024-12-10 05:48:42.879453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:55.107 
[2024-12-10 05:48:42.879461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.879467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1196690) 00:23:55.107 [2024-12-10 05:48:42.879476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.107 [2024-12-10 05:48:42.879486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8700, cid 4, qid 0 00:23:55.107 [2024-12-10 05:48:42.879558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.107 [2024-12-10 05:48:42.879564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.107 [2024-12-10 05:48:42.879567] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879570] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1196690): datao=0, datal=4096, cccid=4 00:23:55.107 [2024-12-10 05:48:42.879574] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f8700) on tqpair(0x1196690): expected_datao=0, payload_size=4096 00:23:55.107 [2024-12-10 05:48:42.879577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879587] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.879590] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.920258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.107 [2024-12-10 05:48:42.920267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.107 [2024-12-10 05:48:42.920271] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.920274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8700) on tqpair=0x1196690 00:23:55.107 [2024-12-10 05:48:42.920283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.920291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.920298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.920303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.920307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.920312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.920316] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:55.107 [2024-12-10 05:48:42.920320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:55.107 [2024-12-10 05:48:42.920325] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:55.107 [2024-12-10 05:48:42.920337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.920341] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1196690) 00:23:55.107 [2024-12-10 05:48:42.920347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.107 [2024-12-10 05:48:42.920353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.920356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.920359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1196690) 00:23:55.107 [2024-12-10 05:48:42.920364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.107 [2024-12-10 05:48:42.920377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8700, cid 4, qid 0 00:23:55.107 [2024-12-10 05:48:42.920382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8880, cid 5, qid 0 00:23:55.107 [2024-12-10 05:48:42.920467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.107 [2024-12-10 05:48:42.920472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.107 [2024-12-10 05:48:42.920475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.107 [2024-12-10 05:48:42.920478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8700) on tqpair=0x1196690 00:23:55.108 [2024-12-10 05:48:42.920484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.108 [2024-12-10 05:48:42.920489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.108 [2024-12-10 05:48:42.920492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.920495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8880) on tqpair=0x1196690 00:23:55.108 [2024-12-10 
05:48:42.920502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.920506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1196690) 00:23:55.108 [2024-12-10 05:48:42.920511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.108 [2024-12-10 05:48:42.920523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8880, cid 5, qid 0 00:23:55.108 [2024-12-10 05:48:42.920589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.108 [2024-12-10 05:48:42.920594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.108 [2024-12-10 05:48:42.920597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.920601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8880) on tqpair=0x1196690 00:23:55.108 [2024-12-10 05:48:42.920608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.920611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1196690) 00:23:55.108 [2024-12-10 05:48:42.920616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.108 [2024-12-10 05:48:42.920625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8880, cid 5, qid 0 00:23:55.108 [2024-12-10 05:48:42.920689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.108 [2024-12-10 05:48:42.920695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.108 [2024-12-10 05:48:42.920698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.920701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x11f8880) on tqpair=0x1196690 00:23:55.108 [2024-12-10 05:48:42.920708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.920712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1196690) 00:23:55.108 [2024-12-10 05:48:42.920717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.108 [2024-12-10 05:48:42.920726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8880, cid 5, qid 0 00:23:55.108 [2024-12-10 05:48:42.920788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.108 [2024-12-10 05:48:42.920793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.108 [2024-12-10 05:48:42.920796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.920800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8880) on tqpair=0x1196690 00:23:55.108 [2024-12-10 05:48:42.920811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.920815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1196690) 00:23:55.108 [2024-12-10 05:48:42.920820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.108 [2024-12-10 05:48:42.920826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.920829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1196690) 00:23:55.108 [2024-12-10 05:48:42.920834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:55.108 [2024-12-10 05:48:42.920840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.920843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1196690) 00:23:55.108 [2024-12-10 05:48:42.920849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.108 [2024-12-10 05:48:42.920855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.920858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1196690) 00:23:55.108 [2024-12-10 05:48:42.920863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.108 [2024-12-10 05:48:42.920875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8880, cid 5, qid 0 00:23:55.108 [2024-12-10 05:48:42.920880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8700, cid 4, qid 0 00:23:55.108 [2024-12-10 05:48:42.920884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8a00, cid 6, qid 0 00:23:55.108 [2024-12-10 05:48:42.920888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8b80, cid 7, qid 0 00:23:55.108 [2024-12-10 05:48:42.921025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.108 [2024-12-10 05:48:42.921030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.108 [2024-12-10 05:48:42.921034] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921037] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1196690): datao=0, datal=8192, cccid=5 00:23:55.108 [2024-12-10 05:48:42.921040] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f8880) on tqpair(0x1196690): expected_datao=0, payload_size=8192 00:23:55.108 [2024-12-10 05:48:42.921044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921081] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921085] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.108 [2024-12-10 05:48:42.921094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.108 [2024-12-10 05:48:42.921097] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921100] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1196690): datao=0, datal=512, cccid=4 00:23:55.108 [2024-12-10 05:48:42.921104] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f8700) on tqpair(0x1196690): expected_datao=0, payload_size=512 00:23:55.108 [2024-12-10 05:48:42.921108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921113] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921116] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.108 [2024-12-10 05:48:42.921125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.108 [2024-12-10 05:48:42.921128] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921131] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1196690): datao=0, datal=512, cccid=6 00:23:55.108 [2024-12-10 05:48:42.921134] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x11f8a00) on tqpair(0x1196690): expected_datao=0, payload_size=512 00:23:55.108 [2024-12-10 05:48:42.921138] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921143] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921146] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.108 [2024-12-10 05:48:42.921155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.108 [2024-12-10 05:48:42.921158] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921161] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1196690): datao=0, datal=4096, cccid=7 00:23:55.108 [2024-12-10 05:48:42.921165] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f8b80) on tqpair(0x1196690): expected_datao=0, payload_size=4096 00:23:55.108 [2024-12-10 05:48:42.921174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921179] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921182] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.108 [2024-12-10 05:48:42.921198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.108 [2024-12-10 05:48:42.921201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8880) on tqpair=0x1196690 00:23:55.108 [2024-12-10 05:48:42.921214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.108 [2024-12-10 05:48:42.921219] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.108 [2024-12-10 05:48:42.921221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8700) on tqpair=0x1196690 00:23:55.108 [2024-12-10 05:48:42.921232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.108 [2024-12-10 05:48:42.921237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.108 [2024-12-10 05:48:42.921240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8a00) on tqpair=0x1196690 00:23:55.108 [2024-12-10 05:48:42.921249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.108 [2024-12-10 05:48:42.921254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.108 [2024-12-10 05:48:42.921257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.108 [2024-12-10 05:48:42.921260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8b80) on tqpair=0x1196690 00:23:55.108 ===================================================== 00:23:55.108 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:55.108 ===================================================== 00:23:55.108 Controller Capabilities/Features 00:23:55.108 ================================ 00:23:55.108 Vendor ID: 8086 00:23:55.108 Subsystem Vendor ID: 8086 00:23:55.108 Serial Number: SPDK00000000000001 00:23:55.108 Model Number: SPDK bdev Controller 00:23:55.108 Firmware Version: 25.01 00:23:55.108 Recommended Arb Burst: 6 00:23:55.108 IEEE OUI Identifier: e4 d2 5c 00:23:55.108 Multi-path I/O 00:23:55.108 May have multiple subsystem ports: Yes 00:23:55.108 May have multiple controllers: Yes 00:23:55.108 Associated with SR-IOV VF: No 
00:23:55.108 Max Data Transfer Size: 131072 00:23:55.108 Max Number of Namespaces: 32 00:23:55.108 Max Number of I/O Queues: 127 00:23:55.108 NVMe Specification Version (VS): 1.3 00:23:55.108 NVMe Specification Version (Identify): 1.3 00:23:55.108 Maximum Queue Entries: 128 00:23:55.108 Contiguous Queues Required: Yes 00:23:55.108 Arbitration Mechanisms Supported 00:23:55.108 Weighted Round Robin: Not Supported 00:23:55.108 Vendor Specific: Not Supported 00:23:55.108 Reset Timeout: 15000 ms 00:23:55.108 Doorbell Stride: 4 bytes 00:23:55.108 NVM Subsystem Reset: Not Supported 00:23:55.108 Command Sets Supported 00:23:55.109 NVM Command Set: Supported 00:23:55.109 Boot Partition: Not Supported 00:23:55.109 Memory Page Size Minimum: 4096 bytes 00:23:55.109 Memory Page Size Maximum: 4096 bytes 00:23:55.109 Persistent Memory Region: Not Supported 00:23:55.109 Optional Asynchronous Events Supported 00:23:55.109 Namespace Attribute Notices: Supported 00:23:55.109 Firmware Activation Notices: Not Supported 00:23:55.109 ANA Change Notices: Not Supported 00:23:55.109 PLE Aggregate Log Change Notices: Not Supported 00:23:55.109 LBA Status Info Alert Notices: Not Supported 00:23:55.109 EGE Aggregate Log Change Notices: Not Supported 00:23:55.109 Normal NVM Subsystem Shutdown event: Not Supported 00:23:55.109 Zone Descriptor Change Notices: Not Supported 00:23:55.109 Discovery Log Change Notices: Not Supported 00:23:55.109 Controller Attributes 00:23:55.109 128-bit Host Identifier: Supported 00:23:55.109 Non-Operational Permissive Mode: Not Supported 00:23:55.109 NVM Sets: Not Supported 00:23:55.109 Read Recovery Levels: Not Supported 00:23:55.109 Endurance Groups: Not Supported 00:23:55.109 Predictable Latency Mode: Not Supported 00:23:55.109 Traffic Based Keep ALive: Not Supported 00:23:55.109 Namespace Granularity: Not Supported 00:23:55.109 SQ Associations: Not Supported 00:23:55.109 UUID List: Not Supported 00:23:55.109 Multi-Domain Subsystem: Not Supported 00:23:55.109 
Fixed Capacity Management: Not Supported 00:23:55.109 Variable Capacity Management: Not Supported 00:23:55.109 Delete Endurance Group: Not Supported 00:23:55.109 Delete NVM Set: Not Supported 00:23:55.109 Extended LBA Formats Supported: Not Supported 00:23:55.109 Flexible Data Placement Supported: Not Supported 00:23:55.109 00:23:55.109 Controller Memory Buffer Support 00:23:55.109 ================================ 00:23:55.109 Supported: No 00:23:55.109 00:23:55.109 Persistent Memory Region Support 00:23:55.109 ================================ 00:23:55.109 Supported: No 00:23:55.109 00:23:55.109 Admin Command Set Attributes 00:23:55.109 ============================ 00:23:55.109 Security Send/Receive: Not Supported 00:23:55.109 Format NVM: Not Supported 00:23:55.109 Firmware Activate/Download: Not Supported 00:23:55.109 Namespace Management: Not Supported 00:23:55.109 Device Self-Test: Not Supported 00:23:55.109 Directives: Not Supported 00:23:55.109 NVMe-MI: Not Supported 00:23:55.109 Virtualization Management: Not Supported 00:23:55.109 Doorbell Buffer Config: Not Supported 00:23:55.109 Get LBA Status Capability: Not Supported 00:23:55.109 Command & Feature Lockdown Capability: Not Supported 00:23:55.109 Abort Command Limit: 4 00:23:55.109 Async Event Request Limit: 4 00:23:55.109 Number of Firmware Slots: N/A 00:23:55.109 Firmware Slot 1 Read-Only: N/A 00:23:55.109 Firmware Activation Without Reset: N/A 00:23:55.109 Multiple Update Detection Support: N/A 00:23:55.109 Firmware Update Granularity: No Information Provided 00:23:55.109 Per-Namespace SMART Log: No 00:23:55.109 Asymmetric Namespace Access Log Page: Not Supported 00:23:55.109 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:55.109 Command Effects Log Page: Supported 00:23:55.109 Get Log Page Extended Data: Supported 00:23:55.109 Telemetry Log Pages: Not Supported 00:23:55.109 Persistent Event Log Pages: Not Supported 00:23:55.109 Supported Log Pages Log Page: May Support 00:23:55.109 Commands Supported & 
Effects Log Page: Not Supported 00:23:55.109 Feature Identifiers & Effects Log Page:May Support 00:23:55.109 NVMe-MI Commands & Effects Log Page: May Support 00:23:55.109 Data Area 4 for Telemetry Log: Not Supported 00:23:55.109 Error Log Page Entries Supported: 128 00:23:55.109 Keep Alive: Supported 00:23:55.109 Keep Alive Granularity: 10000 ms 00:23:55.109 00:23:55.109 NVM Command Set Attributes 00:23:55.109 ========================== 00:23:55.109 Submission Queue Entry Size 00:23:55.109 Max: 64 00:23:55.109 Min: 64 00:23:55.109 Completion Queue Entry Size 00:23:55.109 Max: 16 00:23:55.109 Min: 16 00:23:55.109 Number of Namespaces: 32 00:23:55.109 Compare Command: Supported 00:23:55.109 Write Uncorrectable Command: Not Supported 00:23:55.109 Dataset Management Command: Supported 00:23:55.109 Write Zeroes Command: Supported 00:23:55.109 Set Features Save Field: Not Supported 00:23:55.109 Reservations: Supported 00:23:55.109 Timestamp: Not Supported 00:23:55.109 Copy: Supported 00:23:55.109 Volatile Write Cache: Present 00:23:55.109 Atomic Write Unit (Normal): 1 00:23:55.109 Atomic Write Unit (PFail): 1 00:23:55.109 Atomic Compare & Write Unit: 1 00:23:55.109 Fused Compare & Write: Supported 00:23:55.109 Scatter-Gather List 00:23:55.109 SGL Command Set: Supported 00:23:55.109 SGL Keyed: Supported 00:23:55.109 SGL Bit Bucket Descriptor: Not Supported 00:23:55.109 SGL Metadata Pointer: Not Supported 00:23:55.109 Oversized SGL: Not Supported 00:23:55.109 SGL Metadata Address: Not Supported 00:23:55.109 SGL Offset: Supported 00:23:55.109 Transport SGL Data Block: Not Supported 00:23:55.109 Replay Protected Memory Block: Not Supported 00:23:55.109 00:23:55.109 Firmware Slot Information 00:23:55.109 ========================= 00:23:55.109 Active slot: 1 00:23:55.109 Slot 1 Firmware Revision: 25.01 00:23:55.109 00:23:55.109 00:23:55.109 Commands Supported and Effects 00:23:55.109 ============================== 00:23:55.109 Admin Commands 00:23:55.109 -------------- 
00:23:55.109 Get Log Page (02h): Supported 00:23:55.109 Identify (06h): Supported 00:23:55.109 Abort (08h): Supported 00:23:55.109 Set Features (09h): Supported 00:23:55.109 Get Features (0Ah): Supported 00:23:55.109 Asynchronous Event Request (0Ch): Supported 00:23:55.109 Keep Alive (18h): Supported 00:23:55.109 I/O Commands 00:23:55.109 ------------ 00:23:55.109 Flush (00h): Supported LBA-Change 00:23:55.109 Write (01h): Supported LBA-Change 00:23:55.109 Read (02h): Supported 00:23:55.109 Compare (05h): Supported 00:23:55.109 Write Zeroes (08h): Supported LBA-Change 00:23:55.109 Dataset Management (09h): Supported LBA-Change 00:23:55.109 Copy (19h): Supported LBA-Change 00:23:55.109 00:23:55.109 Error Log 00:23:55.109 ========= 00:23:55.109 00:23:55.109 Arbitration 00:23:55.109 =========== 00:23:55.109 Arbitration Burst: 1 00:23:55.109 00:23:55.109 Power Management 00:23:55.109 ================ 00:23:55.109 Number of Power States: 1 00:23:55.109 Current Power State: Power State #0 00:23:55.109 Power State #0: 00:23:55.109 Max Power: 0.00 W 00:23:55.109 Non-Operational State: Operational 00:23:55.109 Entry Latency: Not Reported 00:23:55.109 Exit Latency: Not Reported 00:23:55.109 Relative Read Throughput: 0 00:23:55.109 Relative Read Latency: 0 00:23:55.109 Relative Write Throughput: 0 00:23:55.109 Relative Write Latency: 0 00:23:55.109 Idle Power: Not Reported 00:23:55.109 Active Power: Not Reported 00:23:55.109 Non-Operational Permissive Mode: Not Supported 00:23:55.109 00:23:55.109 Health Information 00:23:55.109 ================== 00:23:55.109 Critical Warnings: 00:23:55.109 Available Spare Space: OK 00:23:55.109 Temperature: OK 00:23:55.109 Device Reliability: OK 00:23:55.109 Read Only: No 00:23:55.109 Volatile Memory Backup: OK 00:23:55.109 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:55.109 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:55.109 Available Spare: 0% 00:23:55.109 Available Spare Threshold: 0% 00:23:55.109 Life Percentage 
Used:[2024-12-10 05:48:42.921336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.109 [2024-12-10 05:48:42.921340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1196690) 00:23:55.109 [2024-12-10 05:48:42.921346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.109 [2024-12-10 05:48:42.921356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8b80, cid 7, qid 0 00:23:55.109 [2024-12-10 05:48:42.921440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.109 [2024-12-10 05:48:42.921446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.109 [2024-12-10 05:48:42.921449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.109 [2024-12-10 05:48:42.921452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8b80) on tqpair=0x1196690 00:23:55.109 [2024-12-10 05:48:42.921479] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:55.109 [2024-12-10 05:48:42.921489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8100) on tqpair=0x1196690 00:23:55.109 [2024-12-10 05:48:42.921494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.109 [2024-12-10 05:48:42.921499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8280) on tqpair=0x1196690 00:23:55.109 [2024-12-10 05:48:42.921503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.109 [2024-12-10 05:48:42.921507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8400) on tqpair=0x1196690 00:23:55.109 [2024-12-10 05:48:42.921510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.109 [2024-12-10 05:48:42.921515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.109 [2024-12-10 05:48:42.921519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.109 [2024-12-10 05:48:42.921525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.921538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 05:48:42.921550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.921612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 05:48:42.921617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.921620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.110 [2024-12-10 05:48:42.921628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.921640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 05:48:42.921652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.921726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 05:48:42.921732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.921735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.110 [2024-12-10 05:48:42.921742] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:55.110 [2024-12-10 05:48:42.921746] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:55.110 [2024-12-10 05:48:42.921755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.921767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 05:48:42.921777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.921845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 05:48:42.921850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.921853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921856] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.110 [2024-12-10 05:48:42.921864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.921876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 05:48:42.921885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.921963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 05:48:42.921969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.921972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.110 [2024-12-10 05:48:42.921983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.921992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.921998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 05:48:42.922007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.922079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 
05:48:42.922084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.922087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.110 [2024-12-10 05:48:42.922098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.922110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 05:48:42.922119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.922185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 05:48:42.922191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.922194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.110 [2024-12-10 05:48:42.922206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.922217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 
05:48:42.922226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.922290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 05:48:42.922295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.922298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.110 [2024-12-10 05:48:42.922309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.922321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 05:48:42.922330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.922408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 05:48:42.922413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.922416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.110 [2024-12-10 05:48:42.922427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.922440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 05:48:42.922449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.922530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 05:48:42.922536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.922539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.110 [2024-12-10 05:48:42.922550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.922561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 05:48:42.922571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.922631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 05:48:42.922636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.922639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.110 [2024-12-10 05:48:42.922651] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.922663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 05:48:42.922672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.922733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 05:48:42.922739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.922741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.110 [2024-12-10 05:48:42.922753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.110 [2024-12-10 05:48:42.922764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.110 [2024-12-10 05:48:42.922774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.110 [2024-12-10 05:48:42.922852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.110 [2024-12-10 05:48:42.922857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.110 [2024-12-10 05:48:42.922860] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.110 [2024-12-10 05:48:42.922863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.111 [2024-12-10 05:48:42.922871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.922875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.922878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.111 [2024-12-10 05:48:42.922883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.111 [2024-12-10 05:48:42.922894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.111 [2024-12-10 05:48:42.922969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.111 [2024-12-10 05:48:42.922974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.111 [2024-12-10 05:48:42.922977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.922980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.111 [2024-12-10 05:48:42.922988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.922991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.922994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.111 [2024-12-10 05:48:42.923000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.111 [2024-12-10 05:48:42.923009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.111 [2024-12-10 
05:48:42.923074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.111 [2024-12-10 05:48:42.923079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.111 [2024-12-10 05:48:42.923082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.111 [2024-12-10 05:48:42.923094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.111 [2024-12-10 05:48:42.923105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.111 [2024-12-10 05:48:42.923115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.111 [2024-12-10 05:48:42.923176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.111 [2024-12-10 05:48:42.923182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.111 [2024-12-10 05:48:42.923185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.111 [2024-12-10 05:48:42.923195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.111 [2024-12-10 05:48:42.923207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.111 [2024-12-10 05:48:42.923216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.111 [2024-12-10 05:48:42.923293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.111 [2024-12-10 05:48:42.923298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.111 [2024-12-10 05:48:42.923302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.111 [2024-12-10 05:48:42.923312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.111 [2024-12-10 05:48:42.923324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.111 [2024-12-10 05:48:42.923335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.111 [2024-12-10 05:48:42.923410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.111 [2024-12-10 05:48:42.923415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.111 [2024-12-10 05:48:42.923418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.111 [2024-12-10 05:48:42.923429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:55.111 [2024-12-10 05:48:42.923435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.111 [2024-12-10 05:48:42.923441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.111 [2024-12-10 05:48:42.923450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.111 [2024-12-10 05:48:42.923519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.111 [2024-12-10 05:48:42.923524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.111 [2024-12-10 05:48:42.923527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.111 [2024-12-10 05:48:42.923539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.111 [2024-12-10 05:48:42.923550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.111 [2024-12-10 05:48:42.923560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.111 [2024-12-10 05:48:42.923627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.111 [2024-12-10 05:48:42.923632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.111 [2024-12-10 05:48:42.923635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) 
on tqpair=0x1196690 00:23:55.111 [2024-12-10 05:48:42.923646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.111 [2024-12-10 05:48:42.923658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.111 [2024-12-10 05:48:42.923667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.111 [2024-12-10 05:48:42.923744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.111 [2024-12-10 05:48:42.923750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.111 [2024-12-10 05:48:42.923753] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.111 [2024-12-10 05:48:42.923763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.111 [2024-12-10 05:48:42.923775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.111 [2024-12-10 05:48:42.923784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.111 [2024-12-10 05:48:42.923864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.111 [2024-12-10 05:48:42.923869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:23:55.111 [2024-12-10 05:48:42.923872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.111 [2024-12-10 05:48:42.923883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.111 [2024-12-10 05:48:42.923889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.111 [2024-12-10 05:48:42.923895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.111 [2024-12-10 05:48:42.923904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.113 [2024-12-10 05:48:42.926164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.113 [2024-12-10 05:48:42.930177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.113 [2024-12-10 05:48:42.930180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.113 [2024-12-10 05:48:42.930183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.113 [2024-12-10 05:48:42.930193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.113 [2024-12-10 05:48:42.930197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.113 [2024-12-10 05:48:42.930200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1196690) 00:23:55.113 [2024-12-10 05:48:42.930205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.113 [2024-12-10 05:48:42.930216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f8580, cid 3, qid 0 00:23:55.113 [2024-12-10 05:48:42.930276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.113 [2024-12-10 05:48:42.930281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.113 [2024-12-10 05:48:42.930284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.113 [2024-12-10 05:48:42.930287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f8580) on tqpair=0x1196690 00:23:55.113 [2024-12-10 05:48:42.930294] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 8 milliseconds 00:23:55.113 0% 00:23:55.113 Data Units Read: 0 00:23:55.113 Data Units Written: 0 00:23:55.113 Host Read Commands: 0 00:23:55.113 Host Write Commands: 0 00:23:55.113 Controller Busy Time: 0 minutes 00:23:55.113 Power Cycles: 0 00:23:55.113 Power On Hours: 0 hours 00:23:55.113 Unsafe Shutdowns: 0 00:23:55.113 Unrecoverable Media Errors: 0 00:23:55.113 Lifetime Error Log Entries: 0 00:23:55.113 Warning Temperature Time: 0 minutes 00:23:55.113 Critical Temperature Time: 0 minutes 00:23:55.113 00:23:55.113 Number of Queues 00:23:55.113 ================ 00:23:55.113 Number of I/O Submission Queues: 127 00:23:55.113 Number of I/O Completion Queues: 127 00:23:55.113 00:23:55.113 Active Namespaces 00:23:55.113 ================= 00:23:55.113 Namespace ID:1 00:23:55.113 Error Recovery Timeout: Unlimited 00:23:55.113 Command Set Identifier: NVM (00h) 00:23:55.113 Deallocate: Supported 00:23:55.113 Deallocated/Unwritten 
Error: Not Supported 00:23:55.113 Deallocated Read Value: Unknown 00:23:55.113 Deallocate in Write Zeroes: Not Supported 00:23:55.113 Deallocated Guard Field: 0xFFFF 00:23:55.113 Flush: Supported 00:23:55.113 Reservation: Supported 00:23:55.113 Namespace Sharing Capabilities: Multiple Controllers 00:23:55.113 Size (in LBAs): 131072 (0GiB) 00:23:55.113 Capacity (in LBAs): 131072 (0GiB) 00:23:55.113 Utilization (in LBAs): 131072 (0GiB) 00:23:55.113 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:55.113 EUI64: ABCDEF0123456789 00:23:55.113 UUID: 53469ed9-13ba-43a6-8f9f-30d851288235 00:23:55.113 Thin Provisioning: Not Supported 00:23:55.113 Per-NS Atomic Units: Yes 00:23:55.113 Atomic Boundary Size (Normal): 0 00:23:55.113 Atomic Boundary Size (PFail): 0 00:23:55.113 Atomic Boundary Offset: 0 00:23:55.113 Maximum Single Source Range Length: 65535 00:23:55.113 Maximum Copy Length: 65535 00:23:55.113 Maximum Source Range Count: 1 00:23:55.113 NGUID/EUI64 Never Reused: No 00:23:55.113 Namespace Write Protected: No 00:23:55.113 Number of LBA Formats: 1 00:23:55.113 Current LBA Format: LBA Format #00 00:23:55.113 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:55.113 00:23:55.113 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:55.113 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.113 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.113 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.113 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.113 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:55.113 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:55.113 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:23:55.113 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:55.113 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:55.113 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:55.114 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:55.114 05:48:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:55.114 rmmod nvme_tcp 00:23:55.114 rmmod nvme_fabrics 00:23:55.371 rmmod nvme_keyring 00:23:55.371 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:55.371 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:55.371 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:55.371 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1273609 ']' 00:23:55.371 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1273609 00:23:55.371 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1273609 ']' 00:23:55.371 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1273609 00:23:55.371 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1273609 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1273609' 00:23:55.372 killing process with pid 1273609 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1273609 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1273609 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.372 05:48:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:57.906 00:23:57.906 real 0m9.351s 00:23:57.906 user 0m5.775s 00:23:57.906 sys 0m4.820s 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:57.906 
************************************ 00:23:57.906 END TEST nvmf_identify 00:23:57.906 ************************************ 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.906 ************************************ 00:23:57.906 START TEST nvmf_perf 00:23:57.906 ************************************ 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:57.906 * Looking for test storage... 00:23:57.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@337 -- # IFS=.-: 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:57.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.906 --rc genhtml_branch_coverage=1 00:23:57.906 --rc genhtml_function_coverage=1 00:23:57.906 --rc genhtml_legend=1 00:23:57.906 --rc geninfo_all_blocks=1 00:23:57.906 --rc geninfo_unexecuted_blocks=1 00:23:57.906 00:23:57.906 ' 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:57.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.906 --rc genhtml_branch_coverage=1 00:23:57.906 --rc genhtml_function_coverage=1 00:23:57.906 --rc genhtml_legend=1 00:23:57.906 --rc geninfo_all_blocks=1 00:23:57.906 --rc geninfo_unexecuted_blocks=1 00:23:57.906 00:23:57.906 ' 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:57.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.906 --rc genhtml_branch_coverage=1 00:23:57.906 --rc genhtml_function_coverage=1 00:23:57.906 --rc genhtml_legend=1 00:23:57.906 --rc geninfo_all_blocks=1 00:23:57.906 --rc geninfo_unexecuted_blocks=1 00:23:57.906 00:23:57.906 ' 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:57.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.906 --rc genhtml_branch_coverage=1 00:23:57.906 --rc genhtml_function_coverage=1 00:23:57.906 --rc genhtml_legend=1 00:23:57.906 --rc geninfo_all_blocks=1 00:23:57.906 --rc geninfo_unexecuted_blocks=1 00:23:57.906 00:23:57.906 ' 00:23:57.906 05:48:45 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:57.906 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.907 05:48:45 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:57.907 05:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:04.477 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.477 
05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:04.477 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:04.477 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:04.478 Found net devices under 0000:af:00.0: cvl_0_0 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:04.478 Found net devices under 0000:af:00.1: cvl_0_1 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:04.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:24:04.478 00:24:04.478 --- 10.0.0.2 ping statistics --- 00:24:04.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.478 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:04.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:24:04.478 00:24:04.478 --- 10.0.0.1 ping statistics --- 00:24:04.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.478 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1277166 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1277166 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1277166 ']' 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:04.478 [2024-12-10 05:48:51.555073] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:24:04.478 [2024-12-10 05:48:51.555117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.478 [2024-12-10 05:48:51.633491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.478 [2024-12-10 05:48:51.673941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.478 [2024-12-10 05:48:51.673976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.478 [2024-12-10 05:48:51.673983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.478 [2024-12-10 05:48:51.673990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.478 [2024-12-10 05:48:51.673994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:04.478 [2024-12-10 05:48:51.675394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.478 [2024-12-10 05:48:51.675500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.478 [2024-12-10 05:48:51.675609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.478 [2024-12-10 05:48:51.675609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:04.478 05:48:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:07.011 05:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:07.011 05:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:07.269 05:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:24:07.269 05:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:07.527 05:48:55 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:07.528 05:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:24:07.528 05:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:07.528 05:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:07.528 05:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:07.786 [2024-12-10 05:48:55.440639] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.786 05:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:07.786 05:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:07.786 05:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.044 05:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:08.044 05:48:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:08.303 05:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.562 [2024-12-10 05:48:56.232779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.562 05:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:08.562 05:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:24:08.562 05:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:08.562 05:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:08.562 05:48:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:09.938 Initializing NVMe Controllers 00:24:09.938 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:24:09.938 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:24:09.938 Initialization complete. Launching workers. 00:24:09.938 ======================================================== 00:24:09.938 Latency(us) 00:24:09.938 Device Information : IOPS MiB/s Average min max 00:24:09.938 PCIE (0000:5e:00.0) NSID 1 from core 0: 99164.00 387.36 322.44 34.34 5514.92 00:24:09.938 ======================================================== 00:24:09.938 Total : 99164.00 387.36 322.44 34.34 5514.92 00:24:09.938 00:24:09.938 05:48:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.315 Initializing NVMe Controllers 00:24:11.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:11.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:11.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:11.315 Initialization complete. Launching workers. 
00:24:11.315 ======================================================== 00:24:11.315 Latency(us) 00:24:11.315 Device Information : IOPS MiB/s Average min max 00:24:11.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 57.97 0.23 17686.31 108.20 45712.56 00:24:11.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.97 0.18 22774.86 7198.78 50879.33 00:24:11.315 ======================================================== 00:24:11.315 Total : 103.94 0.41 19937.02 108.20 50879.33 00:24:11.315 00:24:11.315 05:48:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:12.691 Initializing NVMe Controllers 00:24:12.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:12.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:12.691 Initialization complete. Launching workers. 
00:24:12.691 ======================================================== 00:24:12.691 Latency(us) 00:24:12.691 Device Information : IOPS MiB/s Average min max 00:24:12.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11169.00 43.63 2865.22 468.77 7621.98 00:24:12.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3860.00 15.08 8332.22 5506.47 16227.83 00:24:12.691 ======================================================== 00:24:12.691 Total : 15029.00 58.71 4269.35 468.77 16227.83 00:24:12.691 00:24:12.691 05:49:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:12.691 05:49:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:12.691 05:49:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:15.226 Initializing NVMe Controllers 00:24:15.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.226 Controller IO queue size 128, less than required. 00:24:15.226 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.226 Controller IO queue size 128, less than required. 00:24:15.226 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:15.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:15.226 Initialization complete. Launching workers. 
00:24:15.226 ======================================================== 00:24:15.226 Latency(us) 00:24:15.226 Device Information : IOPS MiB/s Average min max 00:24:15.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1843.92 460.98 70568.78 48928.03 135798.50 00:24:15.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.34 145.09 223482.91 78629.24 372866.83 00:24:15.226 ======================================================== 00:24:15.226 Total : 2424.26 606.06 107174.92 48928.03 372866.83 00:24:15.226 00:24:15.226 05:49:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:15.485 No valid NVMe controllers or AIO or URING devices found 00:24:15.485 Initializing NVMe Controllers 00:24:15.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.485 Controller IO queue size 128, less than required. 00:24:15.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.485 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:15.485 Controller IO queue size 128, less than required. 00:24:15.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.485 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:15.485 WARNING: Some requested NVMe devices were skipped 00:24:15.485 05:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:18.019 Initializing NVMe Controllers 00:24:18.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:18.019 Controller IO queue size 128, less than required. 00:24:18.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:18.019 Controller IO queue size 128, less than required. 00:24:18.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:18.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:18.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:18.019 Initialization complete. Launching workers. 
00:24:18.019 00:24:18.019 ==================== 00:24:18.019 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:18.019 TCP transport: 00:24:18.019 polls: 16670 00:24:18.019 idle_polls: 13078 00:24:18.019 sock_completions: 3592 00:24:18.019 nvme_completions: 6103 00:24:18.019 submitted_requests: 9066 00:24:18.019 queued_requests: 1 00:24:18.019 00:24:18.019 ==================== 00:24:18.019 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:18.019 TCP transport: 00:24:18.019 polls: 11906 00:24:18.019 idle_polls: 7936 00:24:18.019 sock_completions: 3970 00:24:18.019 nvme_completions: 6827 00:24:18.019 submitted_requests: 10236 00:24:18.019 queued_requests: 1 00:24:18.019 ======================================================== 00:24:18.019 Latency(us) 00:24:18.019 Device Information : IOPS MiB/s Average min max 00:24:18.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1522.82 380.70 86533.17 54517.59 170192.14 00:24:18.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1703.50 425.87 75821.00 45995.14 102553.08 00:24:18.019 ======================================================== 00:24:18.019 Total : 3226.32 806.58 80877.13 45995.14 170192.14 00:24:18.019 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:18.019 rmmod nvme_tcp 00:24:18.019 rmmod nvme_fabrics 00:24:18.019 rmmod nvme_keyring 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1277166 ']' 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1277166 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1277166 ']' 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1277166 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.019 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1277166 00:24:18.278 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:18.278 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:18.278 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1277166' 00:24:18.278 killing process with pid 1277166 00:24:18.278 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 1277166 00:24:18.278 05:49:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1277166 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.654 05:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.559 05:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:21.559 00:24:21.559 real 0m24.047s 00:24:21.559 user 1m2.339s 00:24:21.559 sys 0m8.292s 00:24:21.559 05:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 ************************************ 00:24:21.819 END TEST nvmf_perf 00:24:21.819 ************************************ 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 ************************************ 00:24:21.819 START TEST nvmf_fio_host 00:24:21.819 ************************************ 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:21.819 * Looking for test storage... 00:24:21.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:21.819 05:49:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:21.819 05:49:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:21.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.819 --rc genhtml_branch_coverage=1 00:24:21.819 --rc genhtml_function_coverage=1 00:24:21.819 --rc genhtml_legend=1 00:24:21.819 --rc geninfo_all_blocks=1 00:24:21.819 --rc geninfo_unexecuted_blocks=1 00:24:21.819 00:24:21.819 ' 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:21.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.819 --rc genhtml_branch_coverage=1 00:24:21.819 --rc genhtml_function_coverage=1 00:24:21.819 --rc genhtml_legend=1 00:24:21.819 --rc geninfo_all_blocks=1 00:24:21.819 --rc geninfo_unexecuted_blocks=1 00:24:21.819 00:24:21.819 ' 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:21.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.819 --rc genhtml_branch_coverage=1 00:24:21.819 --rc genhtml_function_coverage=1 00:24:21.819 --rc genhtml_legend=1 00:24:21.819 --rc geninfo_all_blocks=1 00:24:21.819 --rc geninfo_unexecuted_blocks=1 00:24:21.819 00:24:21.819 ' 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:21.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.819 --rc genhtml_branch_coverage=1 00:24:21.819 --rc genhtml_function_coverage=1 00:24:21.819 --rc genhtml_legend=1 00:24:21.819 --rc geninfo_all_blocks=1 00:24:21.819 --rc geninfo_unexecuted_blocks=1 00:24:21.819 00:24:21.819 ' 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.819 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:21.820 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.820 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.820 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.820 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.820 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.820 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.820 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.820 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.820 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.820 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:22.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:22.079 05:49:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:22.079 05:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:24:28.648 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:28.648 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.648 05:49:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.648 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:28.649 Found net devices under 0000:af:00.0: cvl_0_0 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:28.649 Found net devices under 0000:af:00.1: cvl_0_1 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
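The discovery loop traced above globs the `net/` directory under each PCI device's sysfs node and then strips the path prefix with the `##*/` expansion to get the bare interface name. A minimal standalone sketch of that step (the sysfs path is hard-coded for illustration; in `nvmf/common.sh` it comes from the `"/sys/bus/pci/devices/$pci/net/"*` glob on a live system):

```shell
# Sketch of the interface-name extraction from the discovery loop:
# glob results are full sysfs paths; the ##*/ expansion removes
# everything up to and including the last '/' from each array element.
pci_net_devs=("/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under 0000:af:00.0: ${pci_net_devs[0]}"
```

This is why the log reports `cvl_0_0` rather than the full sysfs path: the array is rewritten in place before being appended to `net_devs`.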
00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.649 05:49:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:24:28.649 00:24:28.649 --- 10.0.0.2 ping statistics --- 00:24:28.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.649 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:24:28.649 00:24:28.649 --- 10.0.0.1 ping statistics --- 00:24:28.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.649 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1283201 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1283201 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1283201 ']' 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.649 [2024-12-10 05:49:15.650705] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:24:28.649 [2024-12-10 05:49:15.650750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.649 [2024-12-10 05:49:15.726932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.649 [2024-12-10 05:49:15.767818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.649 [2024-12-10 05:49:15.767853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:28.649 [2024-12-10 05:49:15.767861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.649 [2024-12-10 05:49:15.767867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.649 [2024-12-10 05:49:15.767872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.649 [2024-12-10 05:49:15.769228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.649 [2024-12-10 05:49:15.769339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.649 [2024-12-10 05:49:15.769445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.649 [2024-12-10 05:49:15.769447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:28.649 05:49:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:28.649 [2024-12-10 05:49:16.031645] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.649 05:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:28.649 05:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.649 05:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.649 05:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:28.649 Malloc1 00:24:28.649 05:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.649 05:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:28.908 05:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.166 [2024-12-10 05:49:16.872896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.166 05:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:29.425 05:49:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:29.425 05:49:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:29.684 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:29.684 fio-3.35 00:24:29.684 Starting 1 thread 00:24:32.215 00:24:32.215 test: (groupid=0, jobs=1): err= 0: pid=1283724: Tue Dec 10 05:49:19 2024 00:24:32.215 read: IOPS=12.0k, BW=46.8MiB/s (49.1MB/s)(93.8MiB/2005msec) 00:24:32.215 slat (nsec): min=1573, max=235360, avg=1806.01, stdev=2153.09 00:24:32.215 clat (usec): min=3105, max=10520, avg=5898.60, stdev=479.90 00:24:32.215 lat (usec): min=3140, max=10522, avg=5900.41, stdev=479.89 00:24:32.215 clat percentiles (usec): 00:24:32.215 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538], 00:24:32.215 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:24:32.215 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:24:32.215 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 8586], 99.95th=[ 9241], 00:24:32.215 | 99.99th=[10421] 00:24:32.215 bw ( KiB/s): min=47104, max=48368, per=99.99%, avg=47910.00, stdev=569.72, samples=4 00:24:32.215 iops : min=11776, max=12092, avg=11977.50, stdev=142.43, samples=4 00:24:32.215 write: IOPS=11.9k, BW=46.6MiB/s (48.9MB/s)(93.4MiB/2005msec); 0 zone resets 00:24:32.215 slat (nsec): min=1607, max=224128, avg=1866.79, stdev=1629.06 00:24:32.215 clat (usec): min=2405, max=8982, avg=4775.47, stdev=386.19 00:24:32.215 lat (usec): min=2420, max=8984, avg=4777.34, stdev=386.28 00:24:32.215 clat percentiles (usec): 00:24:32.215 | 1.00th=[ 3884], 5.00th=[ 4228], 10.00th=[ 4293], 20.00th=[ 4490], 00:24:32.215 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 
00:24:32.215 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:24:32.215 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 7570], 99.95th=[ 7832], 00:24:32.215 | 99.99th=[ 8586] 00:24:32.215 bw ( KiB/s): min=47104, max=48256, per=99.98%, avg=47712.00, stdev=471.75, samples=4 00:24:32.215 iops : min=11776, max=12064, avg=11928.00, stdev=117.94, samples=4 00:24:32.215 lat (msec) : 4=1.03%, 10=98.95%, 20=0.02% 00:24:32.215 cpu : usr=75.70%, sys=23.15%, ctx=105, majf=0, minf=2 00:24:32.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:32.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:32.215 issued rwts: total=24018,23920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:32.215 00:24:32.215 Run status group 0 (all jobs): 00:24:32.215 READ: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=93.8MiB (98.4MB), run=2005-2005msec 00:24:32.215 WRITE: bw=46.6MiB/s (48.9MB/s), 46.6MiB/s-46.6MiB/s (48.9MB/s-48.9MB/s), io=93.4MiB (98.0MB), run=2005-2005msec 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:32.215 05:49:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:32.215 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:32.215 fio-3.35 00:24:32.215 Starting 1 thread 00:24:34.749 00:24:34.749 test: (groupid=0, jobs=1): err= 0: pid=1284277: Tue Dec 10 05:49:22 2024 00:24:34.749 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(339MiB/2006msec) 00:24:34.749 slat (nsec): min=2483, max=86438, avg=2784.34, stdev=1219.00 00:24:34.749 clat (usec): min=1322, max=50882, avg=6879.68, stdev=3440.40 00:24:34.749 lat (usec): min=1324, max=50885, avg=6882.46, stdev=3440.47 00:24:34.749 clat percentiles (usec): 00:24:34.749 | 1.00th=[ 3621], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5276], 00:24:34.749 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6587], 60.00th=[ 7046], 00:24:34.749 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9372], 00:24:34.749 | 99.00th=[11338], 99.50th=[43779], 99.90th=[50070], 99.95th=[50594], 00:24:34.749 | 99.99th=[50594] 00:24:34.749 bw ( KiB/s): min=78080, max=92608, per=51.02%, avg=88208.00, stdev=6905.55, samples=4 00:24:34.749 iops : min= 4880, max= 5788, avg=5513.00, stdev=431.60, samples=4 00:24:34.749 write: IOPS=6421, BW=100MiB/s (105MB/s)(180MiB/1798msec); 0 zone resets 00:24:34.749 slat (usec): min=29, max=387, avg=31.26, stdev= 6.94 00:24:34.749 clat (usec): min=3271, max=14312, avg=8542.34, stdev=1490.57 00:24:34.749 lat (usec): min=3304, max=14342, avg=8573.60, stdev=1491.71 00:24:34.749 clat percentiles (usec): 00:24:34.749 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6849], 
20.00th=[ 7242], 00:24:34.749 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:24:34.749 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11207], 00:24:34.749 | 99.00th=[12649], 99.50th=[13304], 99.90th=[13829], 99.95th=[14091], 00:24:34.749 | 99.99th=[14222] 00:24:34.749 bw ( KiB/s): min=81568, max=96256, per=89.42%, avg=91864.00, stdev=6923.61, samples=4 00:24:34.749 iops : min= 5098, max= 6016, avg=5741.50, stdev=432.73, samples=4 00:24:34.749 lat (msec) : 2=0.05%, 4=1.79%, 10=90.52%, 20=7.26%, 50=0.33% 00:24:34.749 lat (msec) : 100=0.06% 00:24:34.749 cpu : usr=87.23%, sys=12.07%, ctx=56, majf=0, minf=2 00:24:34.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:34.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:34.749 issued rwts: total=21676,11545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:34.749 00:24:34.749 Run status group 0 (all jobs): 00:24:34.749 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=339MiB (355MB), run=2006-2006msec 00:24:34.749 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=180MiB (189MB), run=1798-1798msec 00:24:34.749 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.008 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:35.008 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:35.008 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:35.008 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:35.008 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@516 -- # nvmfcleanup 00:24:35.008 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:35.008 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:35.008 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:35.009 rmmod nvme_tcp 00:24:35.009 rmmod nvme_fabrics 00:24:35.009 rmmod nvme_keyring 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1283201 ']' 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1283201 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1283201 ']' 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1283201 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1283201 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1283201' 00:24:35.009 killing process with pid 1283201 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1283201 00:24:35.009 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1283201 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.268 05:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.172 05:49:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:37.432 00:24:37.432 real 0m15.544s 00:24:37.432 user 0m45.549s 00:24:37.432 sys 0m6.356s 00:24:37.432 05:49:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.432 05:49:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.432 
************************************ 00:24:37.432 END TEST nvmf_fio_host 00:24:37.432 ************************************ 00:24:37.432 05:49:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:37.432 05:49:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:37.432 05:49:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.432 05:49:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.432 ************************************ 00:24:37.432 START TEST nvmf_failover 00:24:37.432 ************************************ 00:24:37.432 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:37.432 * Looking for test storage... 00:24:37.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:37.432 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:37.432 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:24:37.432 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 
00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.433 05:49:25 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:37.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.433 --rc genhtml_branch_coverage=1 00:24:37.433 --rc genhtml_function_coverage=1 00:24:37.433 --rc genhtml_legend=1 00:24:37.433 --rc geninfo_all_blocks=1 00:24:37.433 --rc geninfo_unexecuted_blocks=1 00:24:37.433 00:24:37.433 ' 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:37.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.433 --rc genhtml_branch_coverage=1 00:24:37.433 --rc genhtml_function_coverage=1 00:24:37.433 --rc genhtml_legend=1 00:24:37.433 --rc geninfo_all_blocks=1 00:24:37.433 --rc geninfo_unexecuted_blocks=1 00:24:37.433 00:24:37.433 ' 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:37.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.433 --rc genhtml_branch_coverage=1 00:24:37.433 --rc genhtml_function_coverage=1 00:24:37.433 --rc genhtml_legend=1 00:24:37.433 --rc geninfo_all_blocks=1 00:24:37.433 --rc geninfo_unexecuted_blocks=1 00:24:37.433 00:24:37.433 ' 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:37.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.433 --rc genhtml_branch_coverage=1 00:24:37.433 --rc genhtml_function_coverage=1 00:24:37.433 --rc 
genhtml_legend=1 00:24:37.433 --rc geninfo_all_blocks=1 00:24:37.433 --rc geninfo_unexecuted_blocks=1 00:24:37.433 00:24:37.433 ' 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.433 05:49:25 
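The `cmp_versions` / `lt 1.15 2` trace above (from `scripts/common.sh`, deciding whether the installed lcov supports the `--rc` coverage options) can be sketched in pure bash. This is an illustrative re-creation, not the exact SPDK source: versions are split on `.`, `-` and `:` and compared component-wise, with missing components treated as 0.

```shell
# Illustrative re-creation of the "lt 1.15 2" version check traced above.
# Not the exact SPDK scripts/common.sh code; the splitting and compare loop
# mirror what the trace shows (IFS=.-: read -ra ver1 / ver2, then (( ... ))).
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0   # strictly smaller component: version is less
        (( a > b )) && return 1   # strictly larger component: version is not less
    done
    return 1                      # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov is older than 2: add the --rc coverage options"
```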
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.433 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.729 05:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.074 05:49:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:43.074 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:43.074 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:43.074 Found net devices under 0000:af:00.0: cvl_0_0 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:43.074 Found net devices under 0000:af:00.1: cvl_0_1 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
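The device-discovery trace above buckets PCI vendor:device IDs into `e810`, `x722` and `mlx` arrays before walking `/sys/bus/pci/devices/$pci/net` for the attached interfaces (here the two `0x8086 - 0x159b` ports that become `cvl_0_0` / `cvl_0_1`). A sketch of that grouping, with `classify_nic` as an illustrative name and only the IDs echoed in this log:

```shell
# Sketch of the NIC bucketing from nvmf/common.sh as traced above.
# classify_nic is an illustrative helper name; the vendor:device IDs are the
# ones the log matches against (Intel 0x8086, Mellanox 0x15b3).
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592 | 0x8086:0x159b) echo e810 ;;    # Intel E810 (ice driver)
        0x8086:0x37d2)                 echo x722 ;;    # Intel X722
        0x15b3:*)                      echo mlx ;;     # Mellanox ConnectX family
        *)                             echo unknown ;;
    esac
}
classify_nic 0x8086 0x159b   # the "Found 0000:af:00.x (0x8086 - 0x159b)" ports
```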
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.074 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.075 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.075 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.075 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.075 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.075 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.075 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.333 05:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:24:43.333 00:24:43.333 --- 10.0.0.2 ping statistics --- 00:24:43.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.333 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:43.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:24:43.333 00:24:43.333 --- 10.0.0.1 ping statistics --- 00:24:43.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.333 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1288102 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover 
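The `ipts` call expanded earlier in the trace tags every inserted iptables rule with an `SPDK_NVMF:<args>` comment, so teardown can later find and delete exactly the rules this test added. A sketch of that wrapper, with `iptables` replaced by `echo` because the real call needs root; the comment construction is the point:

```shell
# Sketch of the ipts wrapper seen in the trace: the rule's own arguments are
# recorded in an iptables comment under the SPDK_NVMF: prefix. iptables is
# stubbed with echo here so the constructed command line is visible.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```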
-- nvmf/common.sh@510 -- # waitforlisten 1288102 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1288102 ']' 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.333 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:43.334 [2024-12-10 05:49:31.214674] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:24:43.334 [2024-12-10 05:49:31.214718] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.593 [2024-12-10 05:49:31.291770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:43.593 [2024-12-10 05:49:31.333169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.593 [2024-12-10 05:49:31.333205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.593 [2024-12-10 05:49:31.333213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.593 [2024-12-10 05:49:31.333220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:43.593 [2024-12-10 05:49:31.333225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.593 [2024-12-10 05:49:31.334414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.593 [2024-12-10 05:49:31.334503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.593 [2024-12-10 05:49:31.334503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.593 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.593 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:43.593 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:43.593 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.593 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:43.593 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.593 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:43.851 [2024-12-10 05:49:31.647462] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.851 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:44.110 Malloc0 00:24:44.110 05:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.369 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:44.628 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.628 [2024-12-10 05:49:32.432119] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.628 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:44.886 [2024-12-10 05:49:32.644703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:44.886 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:45.146 [2024-12-10 05:49:32.841317] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:45.146 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1288444 00:24:45.146 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:45.146 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:45.146 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1288444 /var/tmp/bdevperf.sock 00:24:45.146 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1288444 ']' 00:24:45.146 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.146 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.146 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:45.146 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.146 05:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.405 05:49:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.405 05:49:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:45.405 05:49:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:45.663 NVMe0n1 00:24:45.663 05:49:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:45.922 00:24:45.922 05:49:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1288477 00:24:45.922 05:49:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.922 05:49:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
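Both daemons above are gated on `waitforlisten` against their RPC sockets (`/var/tmp/spdk.sock` for nvmf_tgt, `/var/tmp/bdevperf.sock` for bdevperf) before any rpc.py call is issued. A simplified poll loop under stated assumptions: the real SPDK helper also checks the process is alive and that the socket answers RPC, while this sketch only waits for the path to appear.

```shell
# Simplified sketch of the waitforlisten gate used twice in the trace above.
# Assumption: only path existence is polled; the real helper additionally
# verifies the PID and probes the socket with rpc.py.
waitforlisten() {
    local rpc_addr=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -e "$rpc_addr" ] && return 0   # socket path showed up: ready
        sleep 0.1                        # back off briefly between probes
    done
    return 1                             # daemon never started listening
}
```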
00:24:47.306 05:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.306 [2024-12-10 05:49:34.937683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498560 is same with the state(6) to be set 00:24:47.306 [previous message repeated with successive timestamps through 05:49:34.937801] 00:24:47.306 05:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:50.592 05:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:50.592 00:24:50.851 05:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:50.851 [2024-12-10 05:49:38.621577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24991c0 is same with the state(6) to be set 00:24:50.851 [previous message repeated with successive timestamps through 05:49:38.621788] 00:24:50.851 05:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:54.140 05:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.140 [2024-12-10 05:49:41.836520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.140 05:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:55.077 05:49:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:55.336 [2024-12-10 05:49:43.055389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e5710 is same with the state(6) to be set 00:24:55.336 [previous message repeated with successive timestamps through 05:49:43.055759] 00:24:55.337 05:49:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1288477 00:25:01.912 { 00:25:01.912 "results": [ 00:25:01.912 { 00:25:01.912 "job": "NVMe0n1", 00:25:01.912 "core_mask": "0x1", 00:25:01.912 "workload": "verify", 00:25:01.912 "status": "finished", 00:25:01.912 "verify_range": { 00:25:01.912 "start": 0, 00:25:01.912 "length": 16384 00:25:01.912 }, 00:25:01.912 "queue_depth": 128, 00:25:01.912 "io_size": 4096, 00:25:01.912 "runtime": 15.004242, 00:25:01.912 "iops": 11339.05998050418, 00:25:01.912 "mibps": 44.293203048844454, 00:25:01.912 "io_failed": 11917, 00:25:01.912 "io_timeout": 0, 00:25:01.912 "avg_latency_us": 10527.805403770948, 00:25:01.912 "min_latency_us": 417.40190476190475, 00:25:01.912 "max_latency_us": 21346.01142857143 00:25:01.912 } 00:25:01.912 ], 00:25:01.912 "core_count": 1 00:25:01.912 } 00:25:01.912 05:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1288444 00:25:01.912 05:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '['
-z 1288444 ']' 00:25:01.912 05:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1288444 00:25:01.912 05:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:01.912 05:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.912 05:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1288444 00:25:01.912 05:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.912 05:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.912 05:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1288444' 00:25:01.912 killing process with pid 1288444 00:25:01.912 05:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1288444 00:25:01.912 05:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1288444 00:25:01.912 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:01.912 [2024-12-10 05:49:32.912549] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:25:01.912 [2024-12-10 05:49:32.912598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288444 ] 00:25:01.912 [2024-12-10 05:49:32.987447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.912 [2024-12-10 05:49:33.027311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.912 Running I/O for 15 seconds... 
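The bdevperf result block above reports `iops`, `mibps`, `io_size`, and `runtime` together; a quick awk cross-check (values copied from that JSON) confirms the throughput figures are mutually consistent, i.e. MiB/s equals IOPS times the 4096-byte I/O size converted to MiB:

```shell
#!/bin/sh
# Cross-check the bdevperf summary printed above. All three input values
# are copied verbatim from the results JSON in the log.
iops=11339.05998050418
io_size=4096      # bytes per I/O ("io_size")
runtime=15.004242 # seconds ("runtime")

awk -v iops="$iops" -v sz="$io_size" -v rt="$runtime" 'BEGIN {
  # MiB/s = IOPS * bytes-per-IO / (1024*1024); matches "mibps" in the log.
  printf "mibps=%.6f\n", iops * sz / (1024 * 1024)
  # Approximate number of I/Os completed over the 15 s run.
  printf "total_ios=%d\n", iops * rt
}'
```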
00:25:01.912 11169.00 IOPS, 43.63 MiB/s [2024-12-10T04:49:49.808Z] [2024-12-10 05:49:34.940153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.912 [2024-12-10 05:49:34.940194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.912 [analogous command/completion pairs follow for the remaining outstanding READ and WRITE I/Os, lba:97472 through lba:97904 (len:8 each), every one completed ABORTED - SQ DELETION (00/08)] 00:25:01.913 [2024-12-10 05:49:34.941049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.913 [2024-12-10 05:49:34.941055] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.914 [2024-12-10 05:49:34.941069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.914 [2024-12-10 05:49:34.941084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.914 [2024-12-10 05:49:34.941099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.914 [2024-12-10 05:49:34.941113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.914 [2024-12-10 05:49:34.941129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:01.914 [2024-12-10 05:49:34.941145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.914 [2024-12-10 05:49:34.941159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.914 [2024-12-10 05:49:34.941177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97984 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97992 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941246] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98000 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98008 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98016 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941326] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98024 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98032 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98040 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98048 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98056 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98064 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98072 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941489] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98080 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98088 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98096 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98104 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 
[2024-12-10 05:49:34.941572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98112 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98120 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.914 [2024-12-10 05:49:34.941630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.914 [2024-12-10 05:49:34.941635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98128 len:8 PRP1 0x0 PRP2 0x0 00:25:01.914 [2024-12-10 05:49:34.941642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.914 [2024-12-10 05:49:34.941649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98136 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98144 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98152 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98160 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98168 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98176 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98184 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98192 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98200 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98208 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98216 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98224 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98232 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98240 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.941983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.941989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.941994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.941999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98248 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.942006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.942012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.942018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.942023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98256 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.942029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.942036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.942041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.942046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98264 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.942052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.942059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 
[2024-12-10 05:49:34.942064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.942069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98272 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.942078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.942084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.942089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.942095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98280 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.942101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.942107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.942112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.942117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98288 len:8 PRP1 0x0 PRP2 0x0 00:25:01.915 [2024-12-10 05:49:34.942124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.915 [2024-12-10 05:49:34.942131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.915 [2024-12-10 05:49:34.942136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.915 [2024-12-10 05:49:34.942141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:98296 len:8 PRP1 0x0 PRP2 0x0
00:25:01.915 [2024-12-10 05:49:34.942147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:01.915 [2024-12-10 05:49:34.942154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:01.915 [2024-12-10 05:49:34.942160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:01.915 [2024-12-10 05:49:34.942169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:8 PRP1 0x0 PRP2 0x0
[... identical abort / queued-i/o / manual-completion sequence repeated for each WRITE from lba:98312 through lba:98480 (len:8, step 8) ...]
00:25:01.916 [2024-12-10 05:49:34.953725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:01.916 [2024-12-10 05:49:34.953769] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:01.917 [2024-12-10 05:49:34.953792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:01.917 [2024-12-10 05:49:34.953800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST command/abort pair repeated for cid:1 through cid:3 ...]
00:25:01.917 [2024-12-10 05:49:34.953849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:01.917 [2024-12-10 05:49:34.953887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202a5d0 (9): Bad file descriptor
00:25:01.917 [2024-12-10 05:49:34.957003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:01.917 [2024-12-10 05:49:35.021749] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:01.917 10883.50 IOPS, 42.51 MiB/s [2024-12-10T04:49:49.813Z] 11120.67 IOPS, 43.44 MiB/s [2024-12-10T04:49:49.813Z] 11274.50 IOPS, 44.04 MiB/s [2024-12-10T04:49:49.813Z]
[2024-12-10 05:49:38.623471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.917 [2024-12-10 05:49:38.623507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same command/abort pair repeated for READs from lba:63144 through lba:63384 (varying cid, len:8, step 8) ...]
00:25:01.917 [2024-12-10 05:49:38.623987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:01.917 [2024-12-10 05:49:38.623997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same command/abort pair repeated for WRITEs from lba:63400 through lba:63712 (varying cid, len:8, step 8) ...]
00:25:01.918 [2024-12-10 05:49:38.624610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:25:01.918 [2024-12-10 05:49:38.624617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.918 [2024-12-10 05:49:38.624625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.918 [2024-12-10 05:49:38.624631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.918 [2024-12-10 05:49:38.624639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.918 [2024-12-10 05:49:38.624645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.919 [2024-12-10 05:49:38.624666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.919 [2024-12-10 05:49:38.624681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.919 [2024-12-10 05:49:38.624696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.919 [2024-12-10 05:49:38.624710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.624747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63776 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.624754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.624768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.624775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63784 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.624782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.624793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.624799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63792 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.624806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624812] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.624817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.624822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63800 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.624829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.624842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.624847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63808 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.624853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.624865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.624870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63816 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.624882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.624894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.624900] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63824 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.624906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.624917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.624922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63832 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.624935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.624948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.624953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63840 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.624960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.624971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.624976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63848 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.624983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.624990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.624995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63856 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.625007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.625013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.625018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63864 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.625030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.625036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.625042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63872 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.625053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.625060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.625065] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63880 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.625077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.625084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.625090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63888 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.625103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.625109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.625114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63896 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.625127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.625134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.625139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63904 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 
[2024-12-10 05:49:38.625153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.625159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.625164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63912 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.625180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.625186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.625191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63920 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.625203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.625209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.625215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63928 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.625226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.625233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.625238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63936 len:8 PRP1 0x0 PRP2 0x0 00:25:01.919 [2024-12-10 05:49:38.625248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.919 [2024-12-10 05:49:38.625255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.919 [2024-12-10 05:49:38.625262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.919 [2024-12-10 05:49:38.625268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63944 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63952 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63960 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63968 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63976 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63984 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63992 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64000 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64008 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64016 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64024 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64032 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64040 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64048 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64056 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.625601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.625607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.625612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.625617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64064 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.635949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.635959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 
[2024-12-10 05:49:38.635964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.635971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64072 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.635977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.635983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.635989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.635994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64080 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.636001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.636007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.636012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.636017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64088 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.636025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.636032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.636037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.636045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:64096 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.636052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.636059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.636064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.636069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64104 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.636076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.636083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.636088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.636094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64112 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.636099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.636106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.636111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.636116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64120 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.636123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.636130] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.636136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.636143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64128 len:8 PRP1 0x0 PRP2 0x0 00:25:01.920 [2024-12-10 05:49:38.636149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.920 [2024-12-10 05:49:38.636156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.920 [2024-12-10 05:49:38.636161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.920 [2024-12-10 05:49:38.636170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64136 len:8 PRP1 0x0 PRP2 0x0 00:25:01.921 [2024-12-10 05:49:38.636176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:38.636183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.921 [2024-12-10 05:49:38.636189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.921 [2024-12-10 05:49:38.636194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64144 len:8 PRP1 0x0 PRP2 0x0 00:25:01.921 [2024-12-10 05:49:38.636201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:38.636208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.921 [2024-12-10 05:49:38.636213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.921 [2024-12-10 
05:49:38.636218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64152 len:8 PRP1 0x0 PRP2 0x0 00:25:01.921 [2024-12-10 05:49:38.636224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:38.636285] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:01.921 [2024-12-10 05:49:38.636313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.921 [2024-12-10 05:49:38.636324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:38.636334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.921 [2024-12-10 05:49:38.636344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:38.636354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.921 [2024-12-10 05:49:38.636362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:38.636372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.921 [2024-12-10 05:49:38.636381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:38.636391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed 
state. 00:25:01.921 [2024-12-10 05:49:38.636418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202a5d0 (9): Bad file descriptor 00:25:01.921 [2024-12-10 05:49:38.640156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:01.921 [2024-12-10 05:49:38.671685] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:25:01.921 11214.00 IOPS, 43.80 MiB/s [2024-12-10T04:49:49.817Z] 11281.83 IOPS, 44.07 MiB/s [2024-12-10T04:49:49.817Z] 11313.29 IOPS, 44.19 MiB/s [2024-12-10T04:49:49.817Z] 11361.38 IOPS, 44.38 MiB/s [2024-12-10T04:49:49.817Z] 11379.00 IOPS, 44.45 MiB/s [2024-12-10T04:49:49.817Z] [2024-12-10 05:49:43.057147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 
[2024-12-10 05:49:43.057243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 
[2024-12-10 05:49:43.057498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.921 [2024-12-10 05:49:43.057551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.921 [2024-12-10 05:49:43.057557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.922 [2024-12-10 05:49:43.057578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.922 [2024-12-10 05:49:43.057593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.922 [2024-12-10 05:49:43.057608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.922 [2024-12-10 05:49:43.057623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.922 [2024-12-10 05:49:43.057638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.922 [2024-12-10 05:49:43.057653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.922 [2024-12-10 05:49:43.057667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.922 [2024-12-10 05:49:43.057682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.922 [2024-12-10 05:49:43.057697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.922 [2024-12-10 05:49:43.057711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.922 [2024-12-10 05:49:43.057726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 
[2024-12-10 05:49:43.057755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057839] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.057990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.057998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058005] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.058013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.058027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.058042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.058056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.058070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.058084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 
nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.058098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.058112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.058128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.058143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.922 [2024-12-10 05:49:43.058158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.922 [2024-12-10 05:49:43.058164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 
[2024-12-10 05:49:43.058175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 
05:49:43.058426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058505] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.923 [2024-12-10 05:49:43.058548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.923 [2024-12-10 05:49:43.058766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.923 [2024-12-10 05:49:43.058773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 
[2024-12-10 05:49:43.058844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058936] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.058989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.058998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.059005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.059015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.059022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.059031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.924 [2024-12-10 05:49:43.059038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.059060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.924 [2024-12-10 05:49:43.059067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89080 len:8 PRP1 0x0 PRP2 0x0 00:25:01.924 [2024-12-10 05:49:43.059074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.059083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.924 [2024-12-10 05:49:43.059089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.924 [2024-12-10 05:49:43.059097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89088 len:8 PRP1 0x0 PRP2 0x0 00:25:01.924 [2024-12-10 05:49:43.059103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.059110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.924 [2024-12-10 05:49:43.059115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.924 [2024-12-10 05:49:43.059121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89096 len:8 PRP1 0x0 PRP2 0x0 00:25:01.924 [2024-12-10 05:49:43.059127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:01.924 [2024-12-10 05:49:43.059134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.924 [2024-12-10 05:49:43.059139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.924 [2024-12-10 05:49:43.059144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89104 len:8 PRP1 0x0 PRP2 0x0 00:25:01.924 [2024-12-10 05:49:43.059150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.059198] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:01.924 [2024-12-10 05:49:43.059219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.924 [2024-12-10 05:49:43.059227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.059235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.924 [2024-12-10 05:49:43.059241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.059249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.924 [2024-12-10 05:49:43.059255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.059262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.924 
[2024-12-10 05:49:43.059268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.924 [2024-12-10 05:49:43.059275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:01.924 [2024-12-10 05:49:43.059299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202a5d0 (9): Bad file descriptor 00:25:01.924 [2024-12-10 05:49:43.062046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:01.924 [2024-12-10 05:49:43.206476] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:01.924 11225.20 IOPS, 43.85 MiB/s [2024-12-10T04:49:49.820Z] 11262.27 IOPS, 43.99 MiB/s [2024-12-10T04:49:49.820Z] 11289.58 IOPS, 44.10 MiB/s [2024-12-10T04:49:49.820Z] 11301.77 IOPS, 44.15 MiB/s [2024-12-10T04:49:49.820Z] 11315.79 IOPS, 44.20 MiB/s 00:25:01.924 Latency(us) 00:25:01.924 [2024-12-10T04:49:49.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.924 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:01.924 Verification LBA range: start 0x0 length 0x4000 00:25:01.924 NVMe0n1 : 15.00 11339.06 44.29 794.24 0.00 10527.81 417.40 21346.01 00:25:01.924 [2024-12-10T04:49:49.820Z] =================================================================================================================== 00:25:01.924 [2024-12-10T04:49:49.820Z] Total : 11339.06 44.29 794.24 0.00 10527.81 417.40 21346.01 00:25:01.924 Received shutdown signal, test time was about 15.000000 seconds 00:25:01.924 00:25:01.924 Latency(us) 00:25:01.924 [2024-12-10T04:49:49.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.924 [2024-12-10T04:49:49.820Z] 
=================================================================================================================== 00:25:01.924 [2024-12-10T04:49:49.820Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1290955 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1290955 /var/tmp/bdevperf.sock 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1290955 ']' 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:01.924 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.925 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:01.925 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:01.925 [2024-12-10 05:49:49.570875] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:01.925 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:01.925 [2024-12-10 05:49:49.763474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:01.925 05:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:02.492 NVMe0n1 00:25:02.492 05:49:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:02.751 00:25:02.751 05:49:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:03.318 00:25:03.318 05:49:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:03.318 05:49:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:03.318 05:49:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.577 05:49:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:06.864 05:49:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:06.864 05:49:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:06.864 05:49:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.864 05:49:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1291810 00:25:06.864 05:49:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1291810 00:25:07.800 { 00:25:07.800 "results": [ 00:25:07.800 { 00:25:07.800 "job": "NVMe0n1", 00:25:07.800 "core_mask": "0x1", 00:25:07.800 "workload": "verify", 00:25:07.800 "status": "finished", 00:25:07.800 "verify_range": { 00:25:07.800 "start": 0, 00:25:07.800 "length": 16384 00:25:07.800 }, 00:25:07.800 "queue_depth": 128, 00:25:07.800 "io_size": 4096, 00:25:07.800 "runtime": 1.010824, 00:25:07.800 "iops": 11471.828923729552, 00:25:07.800 "mibps": 44.81183173331856, 00:25:07.800 "io_failed": 0, 00:25:07.800 "io_timeout": 0, 00:25:07.800 "avg_latency_us": 
11096.647931799142, 00:25:07.800 "min_latency_us": 1685.2114285714285, 00:25:07.800 "max_latency_us": 10485.76 00:25:07.800 } 00:25:07.800 ], 00:25:07.800 "core_count": 1 00:25:07.800 } 00:25:07.800 05:49:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:07.800 [2024-12-10 05:49:49.186720] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:25:07.800 [2024-12-10 05:49:49.186773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290955 ] 00:25:07.800 [2024-12-10 05:49:49.264919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.800 [2024-12-10 05:49:49.301175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.800 [2024-12-10 05:49:51.306430] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:07.800 [2024-12-10 05:49:51.306477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.800 [2024-12-10 05:49:51.306488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.800 [2024-12-10 05:49:51.306497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.800 [2024-12-10 05:49:51.306504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.800 [2024-12-10 05:49:51.306512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:07.800 [2024-12-10 05:49:51.306519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.800 [2024-12-10 05:49:51.306526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.800 [2024-12-10 05:49:51.306532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.800 [2024-12-10 05:49:51.306539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:07.800 [2024-12-10 05:49:51.306564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:07.800 [2024-12-10 05:49:51.306578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21945d0 (9): Bad file descriptor 00:25:07.800 [2024-12-10 05:49:51.351133] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:07.800 Running I/O for 1 seconds... 
00:25:07.800 11387.00 IOPS, 44.48 MiB/s 00:25:07.800 Latency(us) 00:25:07.800 [2024-12-10T04:49:55.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.800 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:07.800 Verification LBA range: start 0x0 length 0x4000 00:25:07.800 NVMe0n1 : 1.01 11471.83 44.81 0.00 0.00 11096.65 1685.21 10485.76 00:25:07.800 [2024-12-10T04:49:55.696Z] =================================================================================================================== 00:25:07.800 [2024-12-10T04:49:55.696Z] Total : 11471.83 44.81 0.00 0.00 11096.65 1685.21 10485.76 00:25:07.800 05:49:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:07.800 05:49:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:08.059 05:49:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.317 05:49:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:08.317 05:49:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:08.576 05:49:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.834 05:49:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1290955 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1290955 ']' 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1290955 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1290955 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1290955' 00:25:12.122 killing process with pid 1290955 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1290955 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1290955 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:12.122 05:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.382 rmmod nvme_tcp 00:25:12.382 rmmod nvme_fabrics 00:25:12.382 rmmod nvme_keyring 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1288102 ']' 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1288102 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1288102 ']' 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1288102 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1288102 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1288102' 00:25:12.382 killing process with pid 1288102 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1288102 00:25:12.382 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1288102 00:25:12.640 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:12.640 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:12.640 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:12.641 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:12.641 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:12.641 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:12.641 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:12.641 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:12.641 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:12.641 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.641 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.641 05:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.176 00:25:15.176 real 0m37.335s 00:25:15.176 user 1m58.593s 00:25:15.176 sys 
0m7.845s 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:15.176 ************************************ 00:25:15.176 END TEST nvmf_failover 00:25:15.176 ************************************ 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.176 ************************************ 00:25:15.176 START TEST nvmf_host_discovery 00:25:15.176 ************************************ 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:15.176 * Looking for test storage... 
00:25:15.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.176 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:15.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.177 --rc genhtml_branch_coverage=1 00:25:15.177 --rc genhtml_function_coverage=1 00:25:15.177 --rc 
genhtml_legend=1 00:25:15.177 --rc geninfo_all_blocks=1 00:25:15.177 --rc geninfo_unexecuted_blocks=1 00:25:15.177 00:25:15.177 ' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:15.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.177 --rc genhtml_branch_coverage=1 00:25:15.177 --rc genhtml_function_coverage=1 00:25:15.177 --rc genhtml_legend=1 00:25:15.177 --rc geninfo_all_blocks=1 00:25:15.177 --rc geninfo_unexecuted_blocks=1 00:25:15.177 00:25:15.177 ' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:15.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.177 --rc genhtml_branch_coverage=1 00:25:15.177 --rc genhtml_function_coverage=1 00:25:15.177 --rc genhtml_legend=1 00:25:15.177 --rc geninfo_all_blocks=1 00:25:15.177 --rc geninfo_unexecuted_blocks=1 00:25:15.177 00:25:15.177 ' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:15.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.177 --rc genhtml_branch_coverage=1 00:25:15.177 --rc genhtml_function_coverage=1 00:25:15.177 --rc genhtml_legend=1 00:25:15.177 --rc geninfo_all_blocks=1 00:25:15.177 --rc geninfo_unexecuted_blocks=1 00:25:15.177 00:25:15.177 ' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.177 05:50:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.177 05:50:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.177 05:50:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:15.177 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:15.178 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.178 05:50:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:20.452 
05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:20.452 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:20.710 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.711 05:50:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:20.711 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:20.711 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:20.711 Found net devices under 0000:af:00.0: cvl_0_0 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:20.711 Found net devices under 0000:af:00.1: cvl_0_1 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:20.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:25:20.711 00:25:20.711 --- 10.0.0.2 ping statistics --- 00:25:20.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.711 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:25:20.711 00:25:20.711 --- 10.0.0.1 ping statistics --- 00:25:20.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.711 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.711 
05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:20.711 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1296178 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1296178 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1296178 ']' 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.970 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.970 [2024-12-10 05:50:08.694368] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:25:20.970 [2024-12-10 05:50:08.694410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.970 [2024-12-10 05:50:08.771443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.970 [2024-12-10 05:50:08.811475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.970 [2024-12-10 05:50:08.811510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.970 [2024-12-10 05:50:08.811518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.970 [2024-12-10 05:50:08.811525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.970 [2024-12-10 05:50:08.811531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.970 [2024-12-10 05:50:08.812026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.229 [2024-12-10 05:50:08.955502] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.229 [2024-12-10 05:50:08.967687] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:21.229 05:50:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.229 null0 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.229 null1 00:25:21.229 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.230 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:21.230 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.230 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.230 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.230 05:50:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1296336 00:25:21.230 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:21.230 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1296336 /tmp/host.sock 00:25:21.230 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1296336 ']' 00:25:21.230 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:21.230 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.230 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:21.230 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:21.230 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.230 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.230 [2024-12-10 05:50:09.049912] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:25:21.230 [2024-12-10 05:50:09.049954] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296336 ] 00:25:21.230 [2024-12-10 05:50:09.105327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.489 [2024-12-10 05:50:09.145828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:21.489 
05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:21.489 05:50:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.489 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:21.749 
05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.749 [2024-12-10 05:50:09.569221] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.749 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:22.008 
05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:22.008 05:50:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:22.574 [2024-12-10 05:50:10.312297] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:22.574 [2024-12-10 05:50:10.312315] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:22.574 [2024-12-10 05:50:10.312327] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:22.574 [2024-12-10 05:50:10.398584] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:22.574 [2024-12-10 05:50:10.453097] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was 
created to 10.0.0.2:4420 00:25:22.574 [2024-12-10 05:50:10.453742] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x241afa0:1 started. 00:25:22.574 [2024-12-10 05:50:10.455119] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:22.574 [2024-12-10 05:50:10.455135] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:22.574 [2024-12-10 05:50:10.460492] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x241afa0 was disconnected and freed. delete nvme_qpair. 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.141 
05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:23.141 
05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:23.141 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.142 05:50:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:23.400 [2024-12-10 05:50:11.220922] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x241b320:1 started. 00:25:23.400 [2024-12-10 05:50:11.223244] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x241b320 was disconnected and freed. delete nvme_qpair. 00:25:23.400 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.400 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:23.400 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:23.400 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:23.401 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:23.401 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:23.401 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:23.401 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:23.401 05:50:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:23.401 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:23.401 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:23.401 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:23.401 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:23.401 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.401 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.401 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.660 [2024-12-10 05:50:11.306014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:23.660 [2024-12-10 
05:50:11.306180] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:23.660 [2024-12-10 05:50:11.306198] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.660 [2024-12-10 05:50:11.394779] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 
4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:23.660 05:50:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:23.660 [2024-12-10 05:50:11.497445] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:23.660 [2024-12-10 05:50:11.497477] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:23.660 [2024-12-10 05:50:11.497484] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:23.660 [2024-12-10 05:50:11.497493] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:24.597 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:24.597 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:24.597 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:24.597 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:24.597 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:24.597 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.597 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:24.597 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.597 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:24.597 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.857 [2024-12-10 05:50:12.562416] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:24.857 [2024-12-10 05:50:12.562437] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:24.857 [2024-12-10 05:50:12.565229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.857 [2024-12-10 05:50:12.565245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.857 [2024-12-10 05:50:12.565253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.857 [2024-12-10 05:50:12.565259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.857 [2024-12-10 05:50:12.565266] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.857 [2024-12-10 05:50:12.565272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.857 [2024-12-10 05:50:12.565279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.857 [2024-12-10 05:50:12.565285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.857 [2024-12-10 05:50:12.565292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb410 is same with the state(6) to be set 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:24.857 
05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.857 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:24.857 [2024-12-10 05:50:12.575242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eb410 (9): Bad file descriptor 00:25:24.857 [2024-12-10 05:50:12.585277] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:24.857 [2024-12-10 05:50:12.585294] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:24.857 [2024-12-10 05:50:12.585300] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:24.857 [2024-12-10 05:50:12.585305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:24.857 [2024-12-10 05:50:12.585321] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:24.857 [2024-12-10 05:50:12.585498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.857 [2024-12-10 05:50:12.585512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23eb410 with addr=10.0.0.2, port=4420 00:25:24.857 [2024-12-10 05:50:12.585520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb410 is same with the state(6) to be set 00:25:24.858 [2024-12-10 05:50:12.585535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eb410 (9): Bad file descriptor 00:25:24.858 [2024-12-10 05:50:12.585550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:24.858 [2024-12-10 05:50:12.585557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:24.858 [2024-12-10 05:50:12.585564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:24.858 [2024-12-10 05:50:12.585570] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:24.858 [2024-12-10 05:50:12.585575] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:24.858 [2024-12-10 05:50:12.585579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.858 [2024-12-10 05:50:12.595351] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:24.858 [2024-12-10 05:50:12.595362] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:24.858 [2024-12-10 05:50:12.595366] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:24.858 [2024-12-10 05:50:12.595370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:24.858 [2024-12-10 05:50:12.595384] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:24.858 [2024-12-10 05:50:12.595630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.858 [2024-12-10 05:50:12.595643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23eb410 with addr=10.0.0.2, port=4420 00:25:24.858 [2024-12-10 05:50:12.595650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb410 is same with the state(6) to be set 00:25:24.858 [2024-12-10 05:50:12.595662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eb410 (9): Bad file descriptor 00:25:24.858 [2024-12-10 05:50:12.595678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:24.858 [2024-12-10 05:50:12.595684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:24.858 [2024-12-10 05:50:12.595691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:24.858 [2024-12-10 05:50:12.595697] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:24.858 [2024-12-10 05:50:12.595702] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:24.858 [2024-12-10 05:50:12.595705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:24.858 [2024-12-10 05:50:12.605415] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:24.858 [2024-12-10 05:50:12.605426] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:24.858 [2024-12-10 05:50:12.605429] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:24.858 [2024-12-10 05:50:12.605433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:24.858 [2024-12-10 05:50:12.605447] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:24.858 [2024-12-10 05:50:12.605681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.858 [2024-12-10 05:50:12.605694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23eb410 with addr=10.0.0.2, port=4420 00:25:24.858 [2024-12-10 05:50:12.605705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb410 is same with the state(6) to be set 00:25:24.858 [2024-12-10 05:50:12.605715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eb410 (9): Bad file descriptor 00:25:24.858 [2024-12-10 05:50:12.605744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:24.858 [2024-12-10 05:50:12.605751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:24.858 [2024-12-10 05:50:12.605758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:24.858 [2024-12-10 05:50:12.605763] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:24.858 [2024-12-10 05:50:12.605768] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:24.858 [2024-12-10 05:50:12.605771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:24.858 [2024-12-10 05:50:12.615477] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:24.858 [2024-12-10 05:50:12.615491] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:24.858 [2024-12-10 05:50:12.615495] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:25:24.858 [2024-12-10 05:50:12.615499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:24.858 [2024-12-10 05:50:12.615512] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:24.858 [2024-12-10 05:50:12.615784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.858 [2024-12-10 05:50:12.615797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23eb410 with addr=10.0.0.2, port=4420 00:25:24.858 [2024-12-10 05:50:12.615804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb410 is same with the state(6) to be set 00:25:24.858 [2024-12-10 05:50:12.615815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eb410 (9): Bad file descriptor 00:25:24.858 [2024-12-10 05:50:12.615830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:24.858 [2024-12-10 05:50:12.615837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:24.858 [2024-12-10 05:50:12.615843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:24.858 [2024-12-10 05:50:12.615849] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:24.858 [2024-12-10 05:50:12.615853] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:24.858 [2024-12-10 05:50:12.615863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.858 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:24.858 [2024-12-10 05:50:12.625542] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:24.858 [2024-12-10 05:50:12.625555] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:24.858 [2024-12-10 05:50:12.625560] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:24.858 [2024-12-10 05:50:12.625563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:24.858 [2024-12-10 05:50:12.625577] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:24.858 [2024-12-10 05:50:12.625801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.858 [2024-12-10 05:50:12.625814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23eb410 with addr=10.0.0.2, port=4420 00:25:24.858 [2024-12-10 05:50:12.625821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb410 is same with the state(6) to be set 00:25:24.858 [2024-12-10 05:50:12.625832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eb410 (9): Bad file descriptor 00:25:24.858 [2024-12-10 05:50:12.625856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:24.858 [2024-12-10 05:50:12.625863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:24.858 [2024-12-10 05:50:12.625871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:24.858 [2024-12-10 05:50:12.625876] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:24.858 [2024-12-10 05:50:12.625881] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:24.858 [2024-12-10 05:50:12.625884] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:24.858 [2024-12-10 05:50:12.635607] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:24.858 [2024-12-10 05:50:12.635617] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:24.858 [2024-12-10 05:50:12.635621] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:24.858 [2024-12-10 05:50:12.635625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:24.858 [2024-12-10 05:50:12.635638] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:24.858 [2024-12-10 05:50:12.635872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.858 [2024-12-10 05:50:12.635883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23eb410 with addr=10.0.0.2, port=4420 00:25:24.858 [2024-12-10 05:50:12.635891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb410 is same with the state(6) to be set 00:25:24.858 [2024-12-10 05:50:12.635901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eb410 (9): Bad file descriptor 00:25:24.858 [2024-12-10 05:50:12.635915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:24.858 [2024-12-10 05:50:12.635921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:24.859 [2024-12-10 05:50:12.635927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:24.859 [2024-12-10 05:50:12.635933] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:24.859 [2024-12-10 05:50:12.635937] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:24.859 [2024-12-10 05:50:12.635941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:24.859 [2024-12-10 05:50:12.645669] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:24.859 [2024-12-10 05:50:12.645679] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:24.859 [2024-12-10 05:50:12.645682] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:24.859 [2024-12-10 05:50:12.645686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:24.859 [2024-12-10 05:50:12.645699] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:24.859 [2024-12-10 05:50:12.645864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.859 [2024-12-10 05:50:12.645876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23eb410 with addr=10.0.0.2, port=4420 00:25:24.859 [2024-12-10 05:50:12.645883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb410 is same with the state(6) to be set 00:25:24.859 [2024-12-10 05:50:12.645893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eb410 (9): Bad file descriptor 00:25:24.859 [2024-12-10 05:50:12.645907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:24.859 [2024-12-10 05:50:12.645913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:24.859 [2024-12-10 05:50:12.645920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:24.859 [2024-12-10 05:50:12.645925] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:24.859 [2024-12-10 05:50:12.645929] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:24.859 [2024-12-10 05:50:12.645933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:24.859 [2024-12-10 05:50:12.648479] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:24.859 [2024-12-10 05:50:12.648493] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.859 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.118 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.118 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:25.118 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:25.118 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 
00:25:25.118 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' 
'"$(get_bdev_list)"' == '""' ']]' 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.119 05:50:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.055 [2024-12-10 05:50:13.920795] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:26.055 [2024-12-10 05:50:13.920812] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 
00:25:26.055 [2024-12-10 05:50:13.920822] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:26.314 [2024-12-10 05:50:14.009083] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:26.314 [2024-12-10 05:50:14.074604] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:26.314 [2024-12-10 05:50:14.075163] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x23e8d00:1 started. 00:25:26.314 [2024-12-10 05:50:14.076643] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:26.314 [2024-12-10 05:50:14.076667] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:26.314 [2024-12-10 05:50:14.080112] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x23e8d00 was disconnected and freed. delete nvme_qpair. 
00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.314 request: 00:25:26.314 { 00:25:26.314 "name": "nvme", 00:25:26.314 "trtype": "tcp", 00:25:26.314 "traddr": "10.0.0.2", 00:25:26.314 "adrfam": "ipv4", 00:25:26.314 "trsvcid": "8009", 00:25:26.314 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:26.314 "wait_for_attach": true, 00:25:26.314 "method": "bdev_nvme_start_discovery", 00:25:26.314 "req_id": 1 00:25:26.314 } 00:25:26.314 Got JSON-RPC error response 00:25:26.314 response: 00:25:26.314 { 00:25:26.314 "code": -17, 00:25:26.314 "message": "File exists" 00:25:26.314 } 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:26.314 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:26.315 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:26.315 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:26.315 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.315 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.573 request: 00:25:26.573 { 00:25:26.573 "name": "nvme_second", 00:25:26.573 "trtype": "tcp", 00:25:26.573 "traddr": "10.0.0.2", 00:25:26.573 "adrfam": "ipv4", 00:25:26.573 "trsvcid": "8009", 00:25:26.573 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:26.573 "wait_for_attach": true, 00:25:26.573 "method": "bdev_nvme_start_discovery", 00:25:26.573 "req_id": 1 00:25:26.573 } 00:25:26.573 Got JSON-RPC error response 00:25:26.573 response: 00:25:26.573 { 00:25:26.573 "code": -17, 00:25:26.573 "message": "File exists" 00:25:26.573 } 
00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:26.573 05:50:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.509 [2024-12-10 05:50:15.316013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.509 [2024-12-10 05:50:15.316041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a5a0 with addr=10.0.0.2, port=8010 00:25:27.509 [2024-12-10 05:50:15.316056] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:27.509 [2024-12-10 05:50:15.316067] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:27.509 [2024-12-10 05:50:15.316074] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:28.447 [2024-12-10 05:50:16.318450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.447 [2024-12-10 05:50:16.318474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2401e50 with addr=10.0.0.2, port=8010 00:25:28.447 [2024-12-10 05:50:16.318486] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:28.447 [2024-12-10 05:50:16.318492] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:28.447 [2024-12-10 05:50:16.318498] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:29.826 [2024-12-10 05:50:17.320682] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:29.826 request: 00:25:29.826 { 00:25:29.826 "name": "nvme_second", 00:25:29.826 "trtype": "tcp", 00:25:29.826 "traddr": "10.0.0.2", 00:25:29.826 "adrfam": "ipv4", 00:25:29.826 "trsvcid": "8010", 00:25:29.826 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:29.826 "wait_for_attach": false, 00:25:29.826 "attach_timeout_ms": 3000, 00:25:29.826 "method": "bdev_nvme_start_discovery", 00:25:29.826 "req_id": 1 
00:25:29.826 } 00:25:29.826 Got JSON-RPC error response 00:25:29.826 response: 00:25:29.826 { 00:25:29.826 "code": -110, 00:25:29.826 "message": "Connection timed out" 00:25:29.826 } 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1296336 00:25:29.826 05:50:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:29.826 rmmod nvme_tcp 00:25:29.826 rmmod nvme_fabrics 00:25:29.826 rmmod nvme_keyring 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:29.826 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1296178 ']' 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1296178 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1296178 ']' 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1296178 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1296178 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1296178' 00:25:29.827 killing process with pid 1296178 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1296178 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1296178 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.827 05:50:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:25:32.363 00:25:32.363 real 0m17.183s 00:25:32.363 user 0m20.504s 00:25:32.363 sys 0m5.829s 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.363 ************************************ 00:25:32.363 END TEST nvmf_host_discovery 00:25:32.363 ************************************ 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.363 ************************************ 00:25:32.363 START TEST nvmf_host_multipath_status 00:25:32.363 ************************************ 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:32.363 * Looking for test storage... 
00:25:32.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:32.363 05:50:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.363 05:50:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:32.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.363 --rc genhtml_branch_coverage=1 00:25:32.363 --rc genhtml_function_coverage=1 00:25:32.363 --rc genhtml_legend=1 00:25:32.363 --rc geninfo_all_blocks=1 00:25:32.363 --rc geninfo_unexecuted_blocks=1 00:25:32.363 00:25:32.363 ' 00:25:32.363 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:32.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.363 --rc genhtml_branch_coverage=1 00:25:32.363 --rc genhtml_function_coverage=1 00:25:32.363 --rc genhtml_legend=1 00:25:32.363 --rc geninfo_all_blocks=1 00:25:32.363 --rc geninfo_unexecuted_blocks=1 00:25:32.363 00:25:32.363 ' 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:32.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.364 --rc genhtml_branch_coverage=1 00:25:32.364 --rc genhtml_function_coverage=1 00:25:32.364 --rc genhtml_legend=1 00:25:32.364 --rc geninfo_all_blocks=1 00:25:32.364 --rc geninfo_unexecuted_blocks=1 00:25:32.364 00:25:32.364 ' 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:32.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.364 --rc genhtml_branch_coverage=1 00:25:32.364 --rc genhtml_function_coverage=1 00:25:32.364 --rc genhtml_legend=1 00:25:32.364 --rc geninfo_all_blocks=1 00:25:32.364 --rc geninfo_unexecuted_blocks=1 00:25:32.364 00:25:32.364 ' 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:32.364 
05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.364 05:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:32.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:32.364 05:50:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:32.364 05:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:37.758 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:37.758 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:37.758 Found net devices under 0000:af:00.0: cvl_0_0 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.758 05:50:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:37.758 Found net devices under 0000:af:00.1: cvl_0_1 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.758 05:50:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.758 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.759 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.759 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.759 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.759 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.759 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:38.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:25:38.018 00:25:38.018 --- 10.0.0.2 ping statistics --- 00:25:38.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.018 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms
00:25:38.018
00:25:38.018 --- 10.0.0.1 ping statistics ---
00:25:38.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:38.018 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms
00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:38.018 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1301361
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1301361
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1301361 ']'
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:38.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:38.277 05:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:38.277 [2024-12-10 05:50:26.002397] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization...
00:25:38.277 [2024-12-10 05:50:26.002443] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:38.277 [2024-12-10 05:50:26.080603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:25:38.277 [2024-12-10 05:50:26.118437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:38.277 [2024-12-10 05:50:26.118475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:38.277 [2024-12-10 05:50:26.118483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:38.277 [2024-12-10 05:50:26.118489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:38.277 [2024-12-10 05:50:26.118493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:38.277 [2024-12-10 05:50:26.119594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:38.277 [2024-12-10 05:50:26.119595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:38.535 05:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:38.535 05:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:25:38.535 05:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:38.535 05:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:38.535 05:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:38.535 05:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:38.535 05:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1301361
00:25:38.535 05:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:38.535 [2024-12-10 05:50:26.424151] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:38.793 05:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:38.793 Malloc0
00:25:38.793 05:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:25:39.051 05:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:39.308 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:39.565 [2024-12-10 05:50:27.215857] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:39.565 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:39.565 [2024-12-10 05:50:27.404331] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:39.565 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:25:39.565 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1301638
00:25:39.565 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:39.565 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1301638 /var/tmp/bdevperf.sock
00:25:39.565 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1301638 ']'
00:25:39.565 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:39.565 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:39.565 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:39.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:39.565 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:39.565 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:39.823 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:39.823 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:25:39.823 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:40.080 05:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:25:40.645 Nvme0n1
00:25:40.645 05:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:25:40.901 Nvme0n1
00:25:40.901 05:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:25:40.901 05:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:25:43.428 05:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:25:43.428 05:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:25:43.428 05:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:43.428 05:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:25:44.361 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:25:44.361 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:44.361 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:44.361 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:44.619 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:44.619 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:44.619 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:44.619 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:44.878 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:44.878 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:44.878 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:44.878 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:45.135 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:45.135 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:45.135 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:45.135 05:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:45.135 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:45.135 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:45.135 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:45.135 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:45.393 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:45.393 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:45.393 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:45.393 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:45.651 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:45.651 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:25:45.651 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:45.909 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:46.166 05:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:25:47.100 05:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:25:47.100 05:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:47.100 05:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:47.100 05:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:47.357 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:47.358 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:47.358 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:47.358 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:47.615 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:47.615 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:47.615 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:47.615 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:47.872 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:47.872 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:47.872 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:47.872 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:47.872 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:47.872 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:47.873 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:47.873 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:48.130 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:48.130 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:48.130 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:48.130 05:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:48.388 05:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:48.388 05:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:25:48.388 05:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:48.646 05:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:25:48.903 05:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:25:49.835 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:25:49.835 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:49.835 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:49.835 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:50.092 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:50.092 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:50.092 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:50.092 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:50.092 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:50.093 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:50.093 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:50.093 05:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:50.350 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:50.350 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:50.350 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:50.350 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:50.608 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:50.608 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:50.608 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:50.608 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:50.865 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:50.866 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:50.866 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:50.866 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:51.123 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:51.123 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:25:51.123 05:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:51.381 05:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:25:51.381 05:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:25:52.754 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:25:52.754 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:52.754 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:52.754 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:52.754 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:52.754 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:52.754 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:52.754 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:53.012 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:53.012 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:53.012 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.012 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:53.012 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:53.012 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:53.012 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.012 05:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:53.270 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:53.270 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:53.270 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.270 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:53.527 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:53.527 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:53.527 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.527 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:53.785 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:53.785 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:25:53.785 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:25:54.043 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:25:54.043 05:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:25:55.415 05:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:25:55.415 05:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:55.415 05:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.415 05:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:55.415 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:55.415 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:55.415 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.415 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:55.415 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:55.415 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:55.415 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.415 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:55.673 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:55.673 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:55.673 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.673 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:55.930 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:55.930 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:25:55.930 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.930 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:56.188 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:56.188 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:56.188 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.188 05:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:56.188 05:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:56.188 05:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:25:56.188 05:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:25:56.446 05:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:56.704 05:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:25:57.638 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:25:57.638 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:57.638 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:57.638 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:57.896 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:57.896 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:57.896 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:57.896 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:58.154 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.155 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:58.155 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.155 05:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:58.412 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.412 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:58.412 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.412 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:58.412 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.412 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:25:58.412 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.412 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:58.670 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:58.670 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:58.670 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.670 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:58.928 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.928 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:25:59.186 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:25:59.186 05:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:25:59.444 05:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:59.702 05:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:26:00.635 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:26:00.635 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:00.635 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock
bdev_nvme_get_io_paths 00:26:00.635 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.892 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.892 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:00.892 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.892 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.149 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.149 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:01.150 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.150 05:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:01.150 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.150 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:01.150 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:01.150 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.408 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.408 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:01.408 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.408 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.666 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.666 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:01.666 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.666 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.924 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.924 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:01.924 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:02.181 05:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:02.439 05:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:03.373 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:03.373 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:03.373 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.373 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:03.631 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.631 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:03.631 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.631 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.631 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.631 05:50:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.889 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.889 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.889 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.889 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.889 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.889 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.147 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.147 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:04.147 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.147 05:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:04.405 05:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.405 
05:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:04.405 05:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.405 05:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:04.662 05:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.662 05:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:04.662 05:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:04.662 05:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:04.920 05:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:06.292 05:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:06.292 05:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:06.292 05:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.292 05:50:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:06.292 05:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.292 05:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:06.292 05:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.292 05:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:06.550 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.550 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:06.550 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.550 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:06.550 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.550 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.550 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.550 05:50:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.807 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.807 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:06.808 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.808 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.065 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.065 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:07.065 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.065 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.323 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.323 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:07.323 05:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:07.323 05:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:07.580 05:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:08.513 05:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:08.513 05:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:08.770 05:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.770 05:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.770 05:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.770 05:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:08.770 05:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.770 05:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.028 05:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.028 05:50:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.028 05:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.028 05:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:09.286 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.286 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:09.286 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.286 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:09.543 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.543 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:09.543 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:09.543 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.801 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.801 
05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:09.801 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.801 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:09.801 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.801 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1301638 00:26:09.801 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1301638 ']' 00:26:09.801 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1301638 00:26:09.801 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:09.801 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.801 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1301638 00:26:10.080 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:10.080 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:10.080 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1301638' 00:26:10.080 killing process with pid 1301638 00:26:10.080 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1301638 00:26:10.080 
05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1301638 00:26:10.080 { 00:26:10.080 "results": [ 00:26:10.080 { 00:26:10.080 "job": "Nvme0n1", 00:26:10.080 "core_mask": "0x4", 00:26:10.080 "workload": "verify", 00:26:10.080 "status": "terminated", 00:26:10.080 "verify_range": { 00:26:10.080 "start": 0, 00:26:10.080 "length": 16384 00:26:10.080 }, 00:26:10.080 "queue_depth": 128, 00:26:10.080 "io_size": 4096, 00:26:10.080 "runtime": 28.833471, 00:26:10.080 "iops": 10766.272295139215, 00:26:10.080 "mibps": 42.05575115288756, 00:26:10.080 "io_failed": 0, 00:26:10.080 "io_timeout": 0, 00:26:10.080 "avg_latency_us": 11869.071872390421, 00:26:10.080 "min_latency_us": 222.3542857142857, 00:26:10.080 "max_latency_us": 3083812.083809524 00:26:10.080 } 00:26:10.080 ], 00:26:10.080 "core_count": 1 00:26:10.080 } 00:26:10.080 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1301638 00:26:10.080 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:10.080 [2024-12-10 05:50:27.448269] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:26:10.080 [2024-12-10 05:50:27.448320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301638 ] 00:26:10.080 [2024-12-10 05:50:27.524590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.080 [2024-12-10 05:50:27.563564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:10.080 Running I/O for 90 seconds... 
00:26:10.080 11586.00 IOPS, 45.26 MiB/s [2024-12-10T04:50:57.976Z] 11545.00 IOPS, 45.10 MiB/s [2024-12-10T04:50:57.976Z] 11666.67 IOPS, 45.57 MiB/s [2024-12-10T04:50:57.976Z] 11669.25 IOPS, 45.58 MiB/s [2024-12-10T04:50:57.976Z] 11650.60 IOPS, 45.51 MiB/s [2024-12-10T04:50:57.976Z] 11615.17 IOPS, 45.37 MiB/s [2024-12-10T04:50:57.976Z] 11612.14 IOPS, 45.36 MiB/s [2024-12-10T04:50:57.976Z] 11632.62 IOPS, 45.44 MiB/s [2024-12-10T04:50:57.976Z] 11648.67 IOPS, 45.50 MiB/s [2024-12-10T04:50:57.976Z] 11669.40 IOPS, 45.58 MiB/s [2024-12-10T04:50:57.976Z] 11677.45 IOPS, 45.62 MiB/s [2024-12-10T04:50:57.976Z] 11651.75 IOPS, 45.51 MiB/s [2024-12-10T04:50:57.976Z] [2024-12-10 05:50:41.661699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661801] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:85 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.661979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.661991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.662000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.662012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.662020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.080 [2024-12-10 05:50:41.662032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.080 [2024-12-10 05:50:41.662040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
[... repeated command/completion pairs elided (00:26:10.080-00:26:10.083, 2024-12-10 05:50:41.662-41.664): WRITE commands on sqid:1 nsid:1 for lba 12776-13152 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands on sqid:1 nsid:1 for lba 12136-12624 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed by 474:spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0, sqhd incrementing 0030 through 007f, wrapping to 0000, and continuing to 001e ...]
00:26:10.083 [2024-12-10 05:50:41.664713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.083 [2024-12-10 05:50:41.664722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.664734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.083 [2024-12-10 05:50:41.664741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.664753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.083 [2024-12-10 05:50:41.664760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.664772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.083 [2024-12-10 05:50:41.664778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.664791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.083 [2024-12-10 05:50:41.664799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.664811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.083 [2024-12-10 05:50:41.664818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.664830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.083 [2024-12-10 05:50:41.664837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.664849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.083 [2024-12-10 05:50:41.664856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.664869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.083 [2024-12-10 05:50:41.664876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.664888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.083 [2024-12-10 05:50:41.664895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.664907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.083 [2024-12-10 05:50:41.664915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.665418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.083 [2024-12-10 05:50:41.665431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.083 [2024-12-10 05:50:41.665445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.665986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.665993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.666005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.666013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.666025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.666031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.666043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.666050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.678816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.678828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.084 [2024-12-10 05:50:41.678842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.084 [2024-12-10 05:50:41.678849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.678861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.678868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.678880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.678887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.678900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.678909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.678922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.678929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.678941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.678949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.678962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.678969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.678981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.678990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.085 [2024-12-10 05:50:41.679252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.085 [2024-12-10 05:50:41.679272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.085 [2024-12-10 05:50:41.679809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.085 [2024-12-10 05:50:41.679832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.085 [2024-12-10 05:50:41.679855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.085 [2024-12-10 05:50:41.679875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.085 [2024-12-10 05:50:41.679894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.085 [2024-12-10 05:50:41.679915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.085 [2024-12-10 05:50:41.679927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.085 [2024-12-10 05:50:41.679935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
[... ~113 further command/completion pairs elided, identical in form to the pair above: READ commands (sqid:1, nsid:1, lba 12200-12648, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1, nsid:1, lba 12656-13096 and 13152, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0; sqhd advances 0068-007f, wraps to 0000, and continues through 0059; timestamps 2024-12-10 05:50:41.679-05:50:41.683 (elapsed 00:26:10.085-00:26:10.088) ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.088 [2024-12-10 05:50:41.683654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.683670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.088 [2024-12-10 05:50:41.683680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.683697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.088 [2024-12-10 05:50:41.683705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.683721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.088 [2024-12-10 05:50:41.683733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.683749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.088 [2024-12-10 05:50:41.683759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.683775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.088 [2024-12-10 05:50:41.683784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.684438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.088 [2024-12-10 05:50:41.684458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.684477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.088 [2024-12-10 05:50:41.684487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.684505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.088 [2024-12-10 05:50:41.684515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.684532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.088 [2024-12-10 05:50:41.684542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.684559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.088 [2024-12-10 05:50:41.684569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.684585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.088 [2024-12-10 05:50:41.684595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.684612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.088 [2024-12-10 05:50:41.684622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.684639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.088 [2024-12-10 05:50:41.684648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.684665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.088 [2024-12-10 05:50:41.684675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:10.088 [2024-12-10 05:50:41.684693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.684706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.684723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.684733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.684750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.684761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.684778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.684788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.684805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.684815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.684832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.684842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.684859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.684869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.684886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.684896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.684913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.684923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.684939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.684949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.684966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.684976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.684992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.089 [2024-12-10 05:50:41.685355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.685620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.685630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.690941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.690956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:10.089 [2024-12-10 05:50:41.690977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.089 [2024-12-10 05:50:41.690989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.090 [2024-12-10 05:50:41.691603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.691636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.691666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.691696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.691725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.691745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.691757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.692977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.090 [2024-12-10 05:50:41.692999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.090 [2024-12-10 05:50:41.693012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.693972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.693983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.694002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.694014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.694034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.694045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.694065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.694076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.694099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.694110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.694129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.694140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.694160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.694180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.694201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.694215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.694235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.694247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.091 [2024-12-10 05:50:41.694267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.091 [2024-12-10 05:50:41.694278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.694297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.092 [2024-12-10 05:50:41.694308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.092 [2024-12-10 05:50:41.695050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.695974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.695993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.696004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.696024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.696035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.696054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.696065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.696084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.696096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.696115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.092 [2024-12-10 05:50:41.696127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.696147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.696158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.696182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.696194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.696212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.092 [2024-12-10 05:50:41.696224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:10.092 [2024-12-10 05:50:41.696243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.696979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.696990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.697008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.697019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.697039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.697050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.697068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.697079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.697101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.093 [2024-12-10 05:50:41.697114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.697133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.093 [2024-12-10 05:50:41.697144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.697163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.093 [2024-12-10 05:50:41.697180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.697199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.093 [2024-12-10 05:50:41.697210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.697229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.093 [2024-12-10 05:50:41.697241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.697996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.093 [2024-12-10 05:50:41.698012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.698033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.093 [2024-12-10 05:50:41.698045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.093 [2024-12-10 05:50:41.698065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.093 [2024-12-10 05:50:41.698076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.698972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.698993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.699004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.699022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.699034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.699053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.699065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.699085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.699096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.699115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.699127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.699146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.699157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.699181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.699193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.699211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.699223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.699242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.699253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.699272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.699283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.094 [2024-12-10 05:50:41.699303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.094 [2024-12-10 05:50:41.699314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.699699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.699710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.700435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.095 [2024-12-10 05:50:41.700463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.700979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.700993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.701003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.701019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.701028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.701045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.701054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.095 [2024-12-10 05:50:41.701069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.095 [2024-12-10 05:50:41.701079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.096 [2024-12-10 05:50:41.701305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.701977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.701986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.702001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.702009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.702024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.702033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.096 [2024-12-10 05:50:41.702048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.096 [2024-12-10 05:50:41.702057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.097 [2024-12-10 05:50:41.702083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.702982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.702991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.097 [2024-12-10 05:50:41.703547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:10.097 [2024-12-10 05:50:41.703561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.703978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.703994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.704004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.704029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.704055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.704080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.704106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.704131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.704155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.704752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.704779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.098 [2024-12-10 05:50:41.704804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.704828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.704854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.704878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.704903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.704926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.704954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.704979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.704993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.705002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.705017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.705026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.705042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.705051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.705067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.705077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:10.098 [2024-12-10 05:50:41.705092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.098 [2024-12-10 05:50:41.705101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.099 [2024-12-10 05:50:41.705653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.705982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.705990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.706005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.706013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.706036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.706045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.706061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.706070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.099 [2024-12-10 05:50:41.706097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.099 [2024-12-10 05:50:41.706106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.100 [2024-12-10 05:50:41.706452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.706477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.706492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.706501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.100 [2024-12-10 05:50:41.707681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.100 [2024-12-10 05:50:41.707689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.707979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.707988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.708492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.708501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.709070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.709085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.709103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.709112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.709127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.709136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.709151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.101 [2024-12-10 05:50:41.709165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.709187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.101 [2024-12-10 05:50:41.709196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.709212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.101 [2024-12-10 05:50:41.709222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.101 [2024-12-10 05:50:41.709238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.101 [2024-12-10 05:50:41.709247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.709982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.709998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.102 [2024-12-10 05:50:41.710007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.710022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.710031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.710046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.710056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.710073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.710083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.710099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.710108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.710123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.102 [2024-12-10 05:50:41.710133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:10.102 [2024-12-10 05:50:41.710148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.103 [2024-12-10 05:50:41.710741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.710755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.710761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.103 [2024-12-10 05:50:41.711496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.103 [2024-12-10 05:50:41.711503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.711981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.711990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.104 [2024-12-10 05:50:41.712267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.104 [2024-12-10 05:50:41.712275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:10.104 [2024-12-10 05:50:41.712288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.104 [2024-12-10 05:50:41.712296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:10.105 [2024-12-10 05:50:41.713074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.105 [2024-12-10 05:50:41.713082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
[... many further WRITE/READ command and ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion pairs on qid:1 (timestamps 05:50:41.712288-05:50:41.719585) elided ...]
00:26:10.107 [2024-12-10 05:50:41.719597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.107 [2024-12-10 05:50:41.719605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.107 [2024-12-10 05:50:41.719617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.107 [2024-12-10 05:50:41.719626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.107 [2024-12-10 05:50:41.719639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.107 [2024-12-10 05:50:41.719646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.107 [2024-12-10 05:50:41.719658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.719979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.719986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.720481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.720503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.720523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.720544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.720565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.108 [2024-12-10 05:50:41.720589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.108 [2024-12-10 05:50:41.720937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:10.108 [2024-12-10 05:50:41.720950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.720957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.720969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.720976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.720989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.720996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.109 [2024-12-10 05:50:41.721292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.109 [2024-12-10 05:50:41.721761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.109 [2024-12-10 05:50:41.721768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.721781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.110 [2024-12-10 05:50:41.721789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.721801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.110 [2024-12-10 05:50:41.721809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.721821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.110 [2024-12-10 05:50:41.721828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.721840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.110 [2024-12-10 05:50:41.721848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.721861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.110 [2024-12-10 05:50:41.721870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.721882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.110 [2024-12-10 05:50:41.721891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.721903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.110 [2024-12-10 05:50:41.721912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.721926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.110 [2024-12-10 05:50:41.721933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.110 [2024-12-10 05:50:41.722439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.110 [2024-12-10 05:50:41.722983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.110 [2024-12-10 05:50:41.722990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.111 [2024-12-10 05:50:41.723890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.111 [2024-12-10 05:50:41.723914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.111 [2024-12-10 05:50:41.723937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.111 [2024-12-10 05:50:41.723961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.723976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.111 [2024-12-10 05:50:41.723984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.111 [2024-12-10 05:50:41.724000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.112 [2024-12-10 05:50:41.724761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.724980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.724987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.725005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.112 [2024-12-10 05:50:41.725015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:10.112 [2024-12-10 05:50:41.725032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:41.725548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:41.725556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.113 11459.46 IOPS, 44.76 MiB/s [2024-12-10T04:50:58.009Z] 10640.93 IOPS, 41.57 MiB/s [2024-12-10T04:50:58.009Z] 9931.53 IOPS, 38.80 MiB/s [2024-12-10T04:50:58.009Z] 9416.19 IOPS, 36.78 MiB/s [2024-12-10T04:50:58.009Z] 9543.06 IOPS, 37.28 MiB/s [2024-12-10T04:50:58.009Z] 9660.06 IOPS, 37.73 MiB/s [2024-12-10T04:50:58.009Z] 9844.16 IOPS, 38.45 MiB/s [2024-12-10T04:50:58.009Z] 10042.00 IOPS, 39.23 MiB/s 
[2024-12-10T04:50:58.009Z] 10212.67 IOPS, 39.89 MiB/s [2024-12-10T04:50:58.009Z] 10271.91 IOPS, 40.12 MiB/s [2024-12-10T04:50:58.009Z] 10327.87 IOPS, 40.34 MiB/s [2024-12-10T04:50:58.009Z] 10384.79 IOPS, 40.57 MiB/s [2024-12-10T04:50:58.009Z] 10514.64 IOPS, 41.07 MiB/s [2024-12-10T04:50:58.009Z] 10637.88 IOPS, 41.55 MiB/s [2024-12-10T04:50:58.009Z] [2024-12-10 05:50:55.365720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.365756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.365789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.365803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.365816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.365823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.365836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.365843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.365855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 
05:50:55.365863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.365875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.365883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.365896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.365903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.365916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.365923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.365935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.365943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.365956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.365963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.365976] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:55.365982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.365994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:55.366002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.366016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:55.366024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.366037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.366047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.366059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-12-10 05:50:55.366066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.366081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.366089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:10.113 [2024-12-10 05:50:55.366102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.113 [2024-12-10 05:50:55.366109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.366123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.366131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.366144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.366151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.366960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.366977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.366992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.366999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.367426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.367446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.367465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.367483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.367495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.114 [2024-12-10 05:50:55.367502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.368147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.368161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.368182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.368189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.368203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.368210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.368222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.368232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.368244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.368251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.368279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.368287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.368300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.368307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.114 [2024-12-10 05:50:55.368320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-12-10 05:50:55.368326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.114 10705.15 IOPS, 41.82 MiB/s [2024-12-10T04:50:58.010Z] 10738.29 IOPS, 41.95 MiB/s [2024-12-10T04:50:58.010Z] Received shutdown signal, test time was about 28.834099 seconds 00:26:10.114 00:26:10.114 Latency(us) 00:26:10.114 [2024-12-10T04:50:58.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.114 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:10.114 Verification LBA range: start 0x0 length 0x4000 00:26:10.115 Nvme0n1 : 28.83 10766.27 42.06 0.00 0.00 11869.07 222.35 3083812.08 00:26:10.115 [2024-12-10T04:50:58.011Z] =================================================================================================================== 00:26:10.115 [2024-12-10T04:50:58.011Z] Total : 10766.27 42.06 0.00 0.00 11869.07 222.35 3083812.08 00:26:10.115 05:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:10.373 
05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:10.373 rmmod nvme_tcp 00:26:10.373 rmmod nvme_fabrics 00:26:10.373 rmmod nvme_keyring 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1301361 ']' 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1301361 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1301361 ']' 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1301361 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1301361 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:10.373 05:50:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1301361' 00:26:10.373 killing process with pid 1301361 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1301361 00:26:10.373 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1301361 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.632 05:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:13.167 00:26:13.167 real 0m40.671s 
00:26:13.167 user 1m50.278s 00:26:13.167 sys 0m11.552s 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.167 ************************************ 00:26:13.167 END TEST nvmf_host_multipath_status 00:26:13.167 ************************************ 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.167 ************************************ 00:26:13.167 START TEST nvmf_discovery_remove_ifc 00:26:13.167 ************************************ 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:13.167 * Looking for test storage... 
00:26:13.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:13.167 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:26:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.168 --rc genhtml_branch_coverage=1 00:26:13.168 --rc genhtml_function_coverage=1 00:26:13.168 --rc genhtml_legend=1 00:26:13.168 --rc geninfo_all_blocks=1 00:26:13.168 --rc geninfo_unexecuted_blocks=1 00:26:13.168 00:26:13.168 ' 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.168 --rc genhtml_branch_coverage=1 00:26:13.168 --rc genhtml_function_coverage=1 00:26:13.168 --rc genhtml_legend=1 00:26:13.168 --rc geninfo_all_blocks=1 00:26:13.168 --rc geninfo_unexecuted_blocks=1 00:26:13.168 00:26:13.168 ' 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.168 --rc genhtml_branch_coverage=1 00:26:13.168 --rc genhtml_function_coverage=1 00:26:13.168 --rc genhtml_legend=1 00:26:13.168 --rc geninfo_all_blocks=1 00:26:13.168 --rc geninfo_unexecuted_blocks=1 00:26:13.168 00:26:13.168 ' 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.168 --rc genhtml_branch_coverage=1 00:26:13.168 --rc genhtml_function_coverage=1 00:26:13.168 --rc genhtml_legend=1 00:26:13.168 --rc geninfo_all_blocks=1 00:26:13.168 --rc geninfo_unexecuted_blocks=1 00:26:13.168 00:26:13.168 ' 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:13.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:13.168 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:13.169 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:13.169 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.169 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:13.169 
05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:13.169 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:13.169 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.169 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.169 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.169 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:13.169 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:13.169 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:13.169 05:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:18.502 05:51:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.502 05:51:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:18.502 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.502 05:51:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:18.502 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:18.502 Found net devices under 0000:af:00.0: cvl_0_0 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:18.502 Found net devices under 0000:af:00.1: cvl_0_1 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:18.502 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:18.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:26:18.850 00:26:18.850 --- 10.0.0.2 ping statistics --- 00:26:18.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.850 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:26:18.850 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:18.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:26:18.850 00:26:18.850 --- 10.0.0.1 ping statistics --- 00:26:18.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.851 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1310486 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 1310486 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1310486 ']' 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.851 05:51:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.851 [2024-12-10 05:51:06.700932] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:26:18.851 [2024-12-10 05:51:06.700980] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.109 [2024-12-10 05:51:06.779588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.109 [2024-12-10 05:51:06.820324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.109 [2024-12-10 05:51:06.820358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
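The `nvmf_tcp_init` sequence traced above (flush addresses, create a namespace, move the target interface into it, assign the 10.0.0.x addresses, open port 4420, ping both ways) can be summarized as a dry-run sketch. Interface names and IPs are taken from this log; `RUN=echo` prints each command instead of executing it, since the real commands require root:

```shell
# Dry-run sketch of the nvmf_tcp_init steps seen in the trace above.
# RUN=echo makes this side-effect-free; drop it (as root) to actually
# perform the setup.
RUN=echo
NS=cvl_0_0_ns_spdk          # target-side network namespace
TGT_IF=cvl_0_0              # interface moved into the namespace
INI_IF=cvl_0_1              # initiator-side interface
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

$RUN ip -4 addr flush "$TGT_IF"
$RUN ip -4 addr flush "$INI_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TGT_IF" netns "$NS"
$RUN ip addr add "$INI_IP/24" dev "$INI_IF"
$RUN ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic (port 4420) in, then verify reachability
# in both directions before starting nvmf_tgt inside the namespace.
$RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 "$TGT_IP"
$RUN ip netns exec "$NS" ping -c 1 "$INI_IP"
```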
00:26:19.109 [2024-12-10 05:51:06.820366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.109 [2024-12-10 05:51:06.820373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.109 [2024-12-10 05:51:06.820378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:19.109 [2024-12-10 05:51:06.820874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.674 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:19.674 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:19.674 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:19.674 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:19.674 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.932 [2024-12-10 05:51:07.586744] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.932 [2024-12-10 05:51:07.594919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:19.932 null0 00:26:19.932 [2024-12-10 05:51:07.626890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1310791 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1310791 /tmp/host.sock 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1310791 ']' 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:19.932 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:19.932 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.932 [2024-12-10 05:51:07.694775] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:26:19.932 [2024-12-10 05:51:07.694815] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310791 ] 00:26:19.933 [2024-12-10 05:51:07.766490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.933 [2024-12-10 05:51:07.807257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.190 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.190 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:20.190 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:20.191 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:20.191 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.191 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.191 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.191 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:20.191 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.191 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.191 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.191 05:51:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:20.191 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.191 05:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.124 [2024-12-10 05:51:08.952842] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:21.124 [2024-12-10 05:51:08.952861] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:21.124 [2024-12-10 05:51:08.952875] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:21.382 [2024-12-10 05:51:09.080275] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:21.382 [2024-12-10 05:51:09.255154] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:21.382 [2024-12-10 05:51:09.255917] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2480b50:1 started. 
00:26:21.382 [2024-12-10 05:51:09.257252] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:21.382 [2024-12-10 05:51:09.257290] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:21.382 [2024-12-10 05:51:09.257309] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:21.382 [2024-12-10 05:51:09.257321] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:21.382 [2024-12-10 05:51:09.257340] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:21.382 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.382 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:21.382 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.382 [2024-12-10 05:51:09.262480] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2480b50 was disconnected and freed. delete nvme_qpair. 
00:26:21.382 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.382 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.382 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.382 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.382 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.382 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.640 05:51:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:21.640 05:51:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.573 05:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.573 05:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.573 05:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.573 05:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.573 05:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.573 05:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.573 05:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.831 05:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.831 05:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:22.831 05:51:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.763 05:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:26:23.763 05:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.763 05:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.763 05:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.763 05:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.763 05:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.763 05:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.763 05:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.763 05:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:23.763 05:51:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:24.696 05:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:24.696 05:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.696 05:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:24.696 05:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.696 05:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:24.696 05:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.696 05:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.696 05:51:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.953 05:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:24.953 05:51:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.886 05:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:25.886 05:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.886 05:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:25.886 05:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.886 05:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:25.886 05:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.886 05:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:25.886 05:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.886 05:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:25.886 05:51:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:26.819 05:51:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.819 05:51:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.819 05:51:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.819 05:51:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.819 05:51:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.819 05:51:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.819 05:51:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.819 05:51:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.819 [2024-12-10 05:51:14.698743] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:26.819 [2024-12-10 05:51:14.698785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.819 [2024-12-10 05:51:14.698795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.819 [2024-12-10 05:51:14.698804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.819 [2024-12-10 05:51:14.698811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.819 [2024-12-10 05:51:14.698822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.819 [2024-12-10 05:51:14.698830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.819 [2024-12-10 05:51:14.698837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.819 
[2024-12-10 05:51:14.698844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.819 [2024-12-10 05:51:14.698851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.819 [2024-12-10 05:51:14.698857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.819 [2024-12-10 05:51:14.698863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245d310 is same with the state(6) to be set 00:26:26.819 [2024-12-10 05:51:14.708766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245d310 (9): Bad file descriptor 00:26:26.819 05:51:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:26.819 05:51:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:27.077 [2024-12-10 05:51:14.718801] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:27.077 [2024-12-10 05:51:14.718814] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:27.077 [2024-12-10 05:51:14.718820] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:27.077 [2024-12-10 05:51:14.718824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:27.077 [2024-12-10 05:51:14.718845] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
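The disconnect/reconnect cycle logged here is governed by the timeouts passed to `bdev_nvme_start_discovery` earlier in the trace (`--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1`): retry the connection every reconnect delay until the ctrlr-loss timeout elapses, then give up and delete the controller. A toy model of that policy, with `try_connect` stubbed to always fail the way the downed `cvl_0_0` interface does (a sketch of the behavior, not SPDK's actual implementation):

```shell
# Toy model of the reconnect policy driving the log lines above:
# retry every RECONNECT_DELAY seconds, and once CTRLR_LOSS_TIMEOUT
# has elapsed without success, delete the controller.
CTRLR_LOSS_TIMEOUT=2
RECONNECT_DELAY=1

# Stub: every attempt fails, mimicking the interface being down.
try_connect() { return 1; }

reconnect_ctrlr() {
    local elapsed=0 status=disconnected
    while [ "$elapsed" -lt "$CTRLR_LOSS_TIMEOUT" ]; do
        if try_connect; then
            status=connected
            break
        fi
        sleep "$RECONNECT_DELAY"
        elapsed=$((elapsed + RECONNECT_DELAY))
    done
    [ "$status" = connected ] || status=deleted
    echo "$status"
}
```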
00:26:28.009 05:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.009 05:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.009 05:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.009 05:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.009 05:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.009 05:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.009 05:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.009 [2024-12-10 05:51:15.752204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:28.009 [2024-12-10 05:51:15.752281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245d310 with addr=10.0.0.2, port=4420 00:26:28.009 [2024-12-10 05:51:15.752313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245d310 is same with the state(6) to be set 00:26:28.009 [2024-12-10 05:51:15.752364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245d310 (9): Bad file descriptor 00:26:28.009 [2024-12-10 05:51:15.753310] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:28.009 [2024-12-10 05:51:15.753371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:28.009 [2024-12-10 05:51:15.753394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:28.009 [2024-12-10 05:51:15.753415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:28.009 [2024-12-10 05:51:15.753445] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:28.009 [2024-12-10 05:51:15.753461] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:28.009 [2024-12-10 05:51:15.753475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:28.009 [2024-12-10 05:51:15.753495] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:28.009 [2024-12-10 05:51:15.753509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:28.009 05:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.009 05:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:28.009 05:51:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:28.942 [2024-12-10 05:51:16.756017] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:28.942 [2024-12-10 05:51:16.756036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:28.942 [2024-12-10 05:51:16.756046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:28.942 [2024-12-10 05:51:16.756053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:28.942 [2024-12-10 05:51:16.756059] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:28.942 [2024-12-10 05:51:16.756066] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:28.942 [2024-12-10 05:51:16.756070] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:28.942 [2024-12-10 05:51:16.756074] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:28.942 [2024-12-10 05:51:16.756093] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:28.942 [2024-12-10 05:51:16.756112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.942 [2024-12-10 05:51:16.756121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.942 [2024-12-10 05:51:16.756129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.942 [2024-12-10 05:51:16.756136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.942 [2024-12-10 05:51:16.756143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:28.942 [2024-12-10 05:51:16.756149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.942 [2024-12-10 05:51:16.756156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.942 [2024-12-10 05:51:16.756162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.942 [2024-12-10 05:51:16.756174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.942 [2024-12-10 05:51:16.756180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.942 [2024-12-10 05:51:16.756186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:28.942 [2024-12-10 05:51:16.756529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244ca60 (9): Bad file descriptor 00:26:28.942 [2024-12-10 05:51:16.757540] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:28.942 [2024-12-10 05:51:16.757551] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:28.942 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.942 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.942 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.942 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:28.942 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.942 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.942 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.942 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.942 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:28.942 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:29.200 05:51:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:30.133 05:51:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.133 05:51:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.133 05:51:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.133 05:51:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.133 05:51:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.133 05:51:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.133 05:51:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.133 05:51:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.133 05:51:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:30.133 05:51:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.066 [2024-12-10 05:51:18.808624] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:31.066 [2024-12-10 05:51:18.808643] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:31.066 [2024-12-10 05:51:18.808656] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:31.066 [2024-12-10 05:51:18.935145] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:31.324 05:51:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.324 05:51:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.324 05:51:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.324 05:51:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.324 05:51:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.324 05:51:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.324 05:51:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.324 05:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.324 05:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:31.324 05:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.324 [2024-12-10 05:51:19.150121] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:31.324 [2024-12-10 05:51:19.150728] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x245f650:1 started. 
00:26:31.324 [2024-12-10 05:51:19.151737] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:31.324 [2024-12-10 05:51:19.151766] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:31.324 [2024-12-10 05:51:19.151783] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:31.324 [2024-12-10 05:51:19.151795] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:31.324 [2024-12-10 05:51:19.151802] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:31.324 [2024-12-10 05:51:19.157580] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x245f650 was disconnected and freed. delete nvme_qpair. 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:32.257 05:51:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1310791 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1310791 ']' 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1310791 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1310791 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:32.257 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:32.258 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1310791' 00:26:32.258 killing process with pid 1310791 00:26:32.258 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1310791 00:26:32.258 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1310791 00:26:32.515 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:32.515 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:32.515 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:32.515 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.515 
05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:32.515 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.515 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.515 rmmod nvme_tcp 00:26:32.515 rmmod nvme_fabrics 00:26:32.515 rmmod nvme_keyring 00:26:32.515 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:32.516 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:32.516 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:32.516 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1310486 ']' 00:26:32.516 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1310486 00:26:32.516 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1310486 ']' 00:26:32.516 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1310486 00:26:32.516 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:32.516 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.516 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1310486 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1310486' 00:26:32.774 
killing process with pid 1310486 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1310486 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1310486 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.774 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:32.775 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.775 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.775 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.775 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.775 05:51:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.310 05:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:35.310 00:26:35.310 real 0m22.111s 00:26:35.310 user 0m27.526s 00:26:35.310 sys 0m5.896s 00:26:35.310 05:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:26:35.310 05:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.310 ************************************ 00:26:35.310 END TEST nvmf_discovery_remove_ifc 00:26:35.310 ************************************ 00:26:35.310 05:51:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:35.310 05:51:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:35.310 05:51:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:35.310 05:51:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.310 ************************************ 00:26:35.311 START TEST nvmf_identify_kernel_target 00:26:35.311 ************************************ 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:35.311 * Looking for test storage... 
00:26:35.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:35.311 05:51:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.311 05:51:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:35.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.311 --rc genhtml_branch_coverage=1 00:26:35.311 --rc genhtml_function_coverage=1 00:26:35.311 --rc genhtml_legend=1 00:26:35.311 --rc geninfo_all_blocks=1 00:26:35.311 --rc geninfo_unexecuted_blocks=1 00:26:35.311 00:26:35.311 ' 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:35.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.311 --rc genhtml_branch_coverage=1 00:26:35.311 --rc genhtml_function_coverage=1 00:26:35.311 --rc genhtml_legend=1 00:26:35.311 --rc geninfo_all_blocks=1 00:26:35.311 --rc geninfo_unexecuted_blocks=1 00:26:35.311 00:26:35.311 ' 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:35.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.311 --rc genhtml_branch_coverage=1 00:26:35.311 --rc genhtml_function_coverage=1 00:26:35.311 --rc genhtml_legend=1 00:26:35.311 --rc geninfo_all_blocks=1 00:26:35.311 --rc geninfo_unexecuted_blocks=1 00:26:35.311 00:26:35.311 ' 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:35.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.311 --rc genhtml_branch_coverage=1 00:26:35.311 --rc genhtml_function_coverage=1 00:26:35.311 --rc genhtml_legend=1 00:26:35.311 --rc geninfo_all_blocks=1 00:26:35.311 --rc geninfo_unexecuted_blocks=1 00:26:35.311 00:26:35.311 ' 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.311 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:35.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:35.312 05:51:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.882 05:51:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:41.882 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.882 05:51:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:41.882 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.882 05:51:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:41.882 Found net devices under 0000:af:00.0: cvl_0_0 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:41.882 Found net devices under 0000:af:00.1: cvl_0_1 
00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:41.882 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:41.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:41.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:26:41.882 00:26:41.882 --- 10.0.0.2 ping statistics --- 00:26:41.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.883 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:26:41.883 00:26:41.883 --- 10.0.0.1 ping statistics --- 00:26:41.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.883 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:41.883 
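The trace above splits target and initiator across the two E810 ports by moving one interface into a dedicated network namespace, assigning the 10.0.0.0/24 addresses, opening TCP port 4420, and ping-checking both directions before `modprobe nvme-tcp`. A condensed sketch of that topology setup, with interface and namespace names taken from this run (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`); this is illustrative of the `nvmf_tcp_init` steps in the trace, requires root, and is specific to this test bed:

```shell
# Sketch of the netns split performed by nvmf_tcp_init (nvmf/common.sh trace above).
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"        # target-side port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port, then verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because both ports sit on the same host, the namespace is what forces traffic onto the wire instead of the loopback path.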
05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:41.883 05:51:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:43.794 Waiting for block devices as requested 00:26:43.794 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:43.794 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:44.053 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:44.053 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:44.053 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:44.312 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:44.312 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:44.312 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:44.312 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:44.571 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:44.571 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:44.571 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:44.830 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:44.830 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:44.830 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:26:45.088 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:45.088 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:45.088 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:45.088 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:45.088 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:45.088 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:45.088 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:45.088 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:45.088 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:45.088 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:45.088 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:45.088 No valid GPT data, bailing 00:26:45.088 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:45.089 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:45.348 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:45.348 05:51:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:45.348 00:26:45.348 Discovery Log Number of Records 2, Generation counter 2 00:26:45.348 =====Discovery Log Entry 0====== 00:26:45.348 trtype: tcp 00:26:45.348 adrfam: ipv4 00:26:45.348 subtype: current discovery subsystem 
00:26:45.348 treq: not specified, sq flow control disable supported 00:26:45.348 portid: 1 00:26:45.348 trsvcid: 4420 00:26:45.348 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:45.348 traddr: 10.0.0.1 00:26:45.348 eflags: none 00:26:45.348 sectype: none 00:26:45.348 =====Discovery Log Entry 1====== 00:26:45.348 trtype: tcp 00:26:45.348 adrfam: ipv4 00:26:45.348 subtype: nvme subsystem 00:26:45.348 treq: not specified, sq flow control disable supported 00:26:45.348 portid: 1 00:26:45.348 trsvcid: 4420 00:26:45.348 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:45.348 traddr: 10.0.0.1 00:26:45.348 eflags: none 00:26:45.348 sectype: none 00:26:45.348 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:45.348 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:45.348 ===================================================== 00:26:45.348 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:45.348 ===================================================== 00:26:45.348 Controller Capabilities/Features 00:26:45.348 ================================ 00:26:45.348 Vendor ID: 0000 00:26:45.348 Subsystem Vendor ID: 0000 00:26:45.348 Serial Number: 8115c3705ee2c49cf69d 00:26:45.348 Model Number: Linux 00:26:45.348 Firmware Version: 6.8.9-20 00:26:45.348 Recommended Arb Burst: 0 00:26:45.348 IEEE OUI Identifier: 00 00 00 00:26:45.348 Multi-path I/O 00:26:45.348 May have multiple subsystem ports: No 00:26:45.348 May have multiple controllers: No 00:26:45.348 Associated with SR-IOV VF: No 00:26:45.348 Max Data Transfer Size: Unlimited 00:26:45.348 Max Number of Namespaces: 0 00:26:45.348 Max Number of I/O Queues: 1024 00:26:45.348 NVMe Specification Version (VS): 1.3 00:26:45.348 NVMe Specification Version (Identify): 1.3 00:26:45.348 Maximum Queue Entries: 1024 
00:26:45.348 Contiguous Queues Required: No 00:26:45.348 Arbitration Mechanisms Supported 00:26:45.348 Weighted Round Robin: Not Supported 00:26:45.348 Vendor Specific: Not Supported 00:26:45.348 Reset Timeout: 7500 ms 00:26:45.348 Doorbell Stride: 4 bytes 00:26:45.348 NVM Subsystem Reset: Not Supported 00:26:45.348 Command Sets Supported 00:26:45.348 NVM Command Set: Supported 00:26:45.348 Boot Partition: Not Supported 00:26:45.348 Memory Page Size Minimum: 4096 bytes 00:26:45.348 Memory Page Size Maximum: 4096 bytes 00:26:45.348 Persistent Memory Region: Not Supported 00:26:45.348 Optional Asynchronous Events Supported 00:26:45.348 Namespace Attribute Notices: Not Supported 00:26:45.348 Firmware Activation Notices: Not Supported 00:26:45.348 ANA Change Notices: Not Supported 00:26:45.348 PLE Aggregate Log Change Notices: Not Supported 00:26:45.348 LBA Status Info Alert Notices: Not Supported 00:26:45.348 EGE Aggregate Log Change Notices: Not Supported 00:26:45.348 Normal NVM Subsystem Shutdown event: Not Supported 00:26:45.348 Zone Descriptor Change Notices: Not Supported 00:26:45.348 Discovery Log Change Notices: Supported 00:26:45.348 Controller Attributes 00:26:45.348 128-bit Host Identifier: Not Supported 00:26:45.348 Non-Operational Permissive Mode: Not Supported 00:26:45.348 NVM Sets: Not Supported 00:26:45.348 Read Recovery Levels: Not Supported 00:26:45.348 Endurance Groups: Not Supported 00:26:45.348 Predictable Latency Mode: Not Supported 00:26:45.348 Traffic Based Keep ALive: Not Supported 00:26:45.348 Namespace Granularity: Not Supported 00:26:45.348 SQ Associations: Not Supported 00:26:45.348 UUID List: Not Supported 00:26:45.348 Multi-Domain Subsystem: Not Supported 00:26:45.348 Fixed Capacity Management: Not Supported 00:26:45.348 Variable Capacity Management: Not Supported 00:26:45.348 Delete Endurance Group: Not Supported 00:26:45.348 Delete NVM Set: Not Supported 00:26:45.348 Extended LBA Formats Supported: Not Supported 00:26:45.348 Flexible 
Data Placement Supported: Not Supported 00:26:45.348 00:26:45.348 Controller Memory Buffer Support 00:26:45.348 ================================ 00:26:45.348 Supported: No 00:26:45.348 00:26:45.348 Persistent Memory Region Support 00:26:45.348 ================================ 00:26:45.348 Supported: No 00:26:45.348 00:26:45.348 Admin Command Set Attributes 00:26:45.348 ============================ 00:26:45.348 Security Send/Receive: Not Supported 00:26:45.348 Format NVM: Not Supported 00:26:45.348 Firmware Activate/Download: Not Supported 00:26:45.348 Namespace Management: Not Supported 00:26:45.348 Device Self-Test: Not Supported 00:26:45.348 Directives: Not Supported 00:26:45.348 NVMe-MI: Not Supported 00:26:45.348 Virtualization Management: Not Supported 00:26:45.348 Doorbell Buffer Config: Not Supported 00:26:45.348 Get LBA Status Capability: Not Supported 00:26:45.348 Command & Feature Lockdown Capability: Not Supported 00:26:45.348 Abort Command Limit: 1 00:26:45.348 Async Event Request Limit: 1 00:26:45.348 Number of Firmware Slots: N/A 00:26:45.348 Firmware Slot 1 Read-Only: N/A 00:26:45.348 Firmware Activation Without Reset: N/A 00:26:45.348 Multiple Update Detection Support: N/A 00:26:45.348 Firmware Update Granularity: No Information Provided 00:26:45.348 Per-Namespace SMART Log: No 00:26:45.348 Asymmetric Namespace Access Log Page: Not Supported 00:26:45.348 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:45.348 Command Effects Log Page: Not Supported 00:26:45.348 Get Log Page Extended Data: Supported 00:26:45.348 Telemetry Log Pages: Not Supported 00:26:45.348 Persistent Event Log Pages: Not Supported 00:26:45.348 Supported Log Pages Log Page: May Support 00:26:45.348 Commands Supported & Effects Log Page: Not Supported 00:26:45.348 Feature Identifiers & Effects Log Page:May Support 00:26:45.348 NVMe-MI Commands & Effects Log Page: May Support 00:26:45.348 Data Area 4 for Telemetry Log: Not Supported 00:26:45.348 Error Log Page Entries 
Supported: 1 00:26:45.348 Keep Alive: Not Supported 00:26:45.348 00:26:45.348 NVM Command Set Attributes 00:26:45.348 ========================== 00:26:45.348 Submission Queue Entry Size 00:26:45.348 Max: 1 00:26:45.348 Min: 1 00:26:45.348 Completion Queue Entry Size 00:26:45.348 Max: 1 00:26:45.348 Min: 1 00:26:45.348 Number of Namespaces: 0 00:26:45.348 Compare Command: Not Supported 00:26:45.348 Write Uncorrectable Command: Not Supported 00:26:45.348 Dataset Management Command: Not Supported 00:26:45.348 Write Zeroes Command: Not Supported 00:26:45.348 Set Features Save Field: Not Supported 00:26:45.348 Reservations: Not Supported 00:26:45.348 Timestamp: Not Supported 00:26:45.348 Copy: Not Supported 00:26:45.349 Volatile Write Cache: Not Present 00:26:45.349 Atomic Write Unit (Normal): 1 00:26:45.349 Atomic Write Unit (PFail): 1 00:26:45.349 Atomic Compare & Write Unit: 1 00:26:45.349 Fused Compare & Write: Not Supported 00:26:45.349 Scatter-Gather List 00:26:45.349 SGL Command Set: Supported 00:26:45.349 SGL Keyed: Not Supported 00:26:45.349 SGL Bit Bucket Descriptor: Not Supported 00:26:45.349 SGL Metadata Pointer: Not Supported 00:26:45.349 Oversized SGL: Not Supported 00:26:45.349 SGL Metadata Address: Not Supported 00:26:45.349 SGL Offset: Supported 00:26:45.349 Transport SGL Data Block: Not Supported 00:26:45.349 Replay Protected Memory Block: Not Supported 00:26:45.349 00:26:45.349 Firmware Slot Information 00:26:45.349 ========================= 00:26:45.349 Active slot: 0 00:26:45.349 00:26:45.349 00:26:45.349 Error Log 00:26:45.349 ========= 00:26:45.349 00:26:45.349 Active Namespaces 00:26:45.349 ================= 00:26:45.349 Discovery Log Page 00:26:45.349 ================== 00:26:45.349 Generation Counter: 2 00:26:45.349 Number of Records: 2 00:26:45.349 Record Format: 0 00:26:45.349 00:26:45.349 Discovery Log Entry 0 00:26:45.349 ---------------------- 00:26:45.349 Transport Type: 3 (TCP) 00:26:45.349 Address Family: 1 (IPv4) 00:26:45.349 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:45.349 Entry Flags: 00:26:45.349 Duplicate Returned Information: 0 00:26:45.349 Explicit Persistent Connection Support for Discovery: 0 00:26:45.349 Transport Requirements: 00:26:45.349 Secure Channel: Not Specified 00:26:45.349 Port ID: 1 (0x0001) 00:26:45.349 Controller ID: 65535 (0xffff) 00:26:45.349 Admin Max SQ Size: 32 00:26:45.349 Transport Service Identifier: 4420 00:26:45.349 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:45.349 Transport Address: 10.0.0.1 00:26:45.349 Discovery Log Entry 1 00:26:45.349 ---------------------- 00:26:45.349 Transport Type: 3 (TCP) 00:26:45.349 Address Family: 1 (IPv4) 00:26:45.349 Subsystem Type: 2 (NVM Subsystem) 00:26:45.349 Entry Flags: 00:26:45.349 Duplicate Returned Information: 0 00:26:45.349 Explicit Persistent Connection Support for Discovery: 0 00:26:45.349 Transport Requirements: 00:26:45.349 Secure Channel: Not Specified 00:26:45.349 Port ID: 1 (0x0001) 00:26:45.349 Controller ID: 65535 (0xffff) 00:26:45.349 Admin Max SQ Size: 32 00:26:45.349 Transport Service Identifier: 4420 00:26:45.349 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:45.349 Transport Address: 10.0.0.1 00:26:45.349 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:45.608 get_feature(0x01) failed 00:26:45.608 get_feature(0x02) failed 00:26:45.608 get_feature(0x04) failed 00:26:45.608 ===================================================== 00:26:45.608 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:45.608 ===================================================== 00:26:45.608 Controller Capabilities/Features 00:26:45.608 ================================ 00:26:45.608 Vendor ID: 0000 00:26:45.608 Subsystem Vendor ID: 
0000 00:26:45.608 Serial Number: cbfcad077cc9f3cf7df2 00:26:45.608 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:45.608 Firmware Version: 6.8.9-20 00:26:45.608 Recommended Arb Burst: 6 00:26:45.608 IEEE OUI Identifier: 00 00 00 00:26:45.609 Multi-path I/O 00:26:45.609 May have multiple subsystem ports: Yes 00:26:45.609 May have multiple controllers: Yes 00:26:45.609 Associated with SR-IOV VF: No 00:26:45.609 Max Data Transfer Size: Unlimited 00:26:45.609 Max Number of Namespaces: 1024 00:26:45.609 Max Number of I/O Queues: 128 00:26:45.609 NVMe Specification Version (VS): 1.3 00:26:45.609 NVMe Specification Version (Identify): 1.3 00:26:45.609 Maximum Queue Entries: 1024 00:26:45.609 Contiguous Queues Required: No 00:26:45.609 Arbitration Mechanisms Supported 00:26:45.609 Weighted Round Robin: Not Supported 00:26:45.609 Vendor Specific: Not Supported 00:26:45.609 Reset Timeout: 7500 ms 00:26:45.609 Doorbell Stride: 4 bytes 00:26:45.609 NVM Subsystem Reset: Not Supported 00:26:45.609 Command Sets Supported 00:26:45.609 NVM Command Set: Supported 00:26:45.609 Boot Partition: Not Supported 00:26:45.609 Memory Page Size Minimum: 4096 bytes 00:26:45.609 Memory Page Size Maximum: 4096 bytes 00:26:45.609 Persistent Memory Region: Not Supported 00:26:45.609 Optional Asynchronous Events Supported 00:26:45.609 Namespace Attribute Notices: Supported 00:26:45.609 Firmware Activation Notices: Not Supported 00:26:45.609 ANA Change Notices: Supported 00:26:45.609 PLE Aggregate Log Change Notices: Not Supported 00:26:45.609 LBA Status Info Alert Notices: Not Supported 00:26:45.609 EGE Aggregate Log Change Notices: Not Supported 00:26:45.609 Normal NVM Subsystem Shutdown event: Not Supported 00:26:45.609 Zone Descriptor Change Notices: Not Supported 00:26:45.609 Discovery Log Change Notices: Not Supported 00:26:45.609 Controller Attributes 00:26:45.609 128-bit Host Identifier: Supported 00:26:45.609 Non-Operational Permissive Mode: Not Supported 00:26:45.609 NVM Sets: Not 
Supported 00:26:45.609 Read Recovery Levels: Not Supported 00:26:45.609 Endurance Groups: Not Supported 00:26:45.609 Predictable Latency Mode: Not Supported 00:26:45.609 Traffic Based Keep ALive: Supported 00:26:45.609 Namespace Granularity: Not Supported 00:26:45.609 SQ Associations: Not Supported 00:26:45.609 UUID List: Not Supported 00:26:45.609 Multi-Domain Subsystem: Not Supported 00:26:45.609 Fixed Capacity Management: Not Supported 00:26:45.609 Variable Capacity Management: Not Supported 00:26:45.609 Delete Endurance Group: Not Supported 00:26:45.609 Delete NVM Set: Not Supported 00:26:45.609 Extended LBA Formats Supported: Not Supported 00:26:45.609 Flexible Data Placement Supported: Not Supported 00:26:45.609 00:26:45.609 Controller Memory Buffer Support 00:26:45.609 ================================ 00:26:45.609 Supported: No 00:26:45.609 00:26:45.609 Persistent Memory Region Support 00:26:45.609 ================================ 00:26:45.609 Supported: No 00:26:45.609 00:26:45.609 Admin Command Set Attributes 00:26:45.609 ============================ 00:26:45.609 Security Send/Receive: Not Supported 00:26:45.609 Format NVM: Not Supported 00:26:45.609 Firmware Activate/Download: Not Supported 00:26:45.609 Namespace Management: Not Supported 00:26:45.609 Device Self-Test: Not Supported 00:26:45.609 Directives: Not Supported 00:26:45.609 NVMe-MI: Not Supported 00:26:45.609 Virtualization Management: Not Supported 00:26:45.609 Doorbell Buffer Config: Not Supported 00:26:45.609 Get LBA Status Capability: Not Supported 00:26:45.609 Command & Feature Lockdown Capability: Not Supported 00:26:45.609 Abort Command Limit: 4 00:26:45.609 Async Event Request Limit: 4 00:26:45.609 Number of Firmware Slots: N/A 00:26:45.609 Firmware Slot 1 Read-Only: N/A 00:26:45.609 Firmware Activation Without Reset: N/A 00:26:45.609 Multiple Update Detection Support: N/A 00:26:45.609 Firmware Update Granularity: No Information Provided 00:26:45.609 Per-Namespace SMART Log: Yes 
00:26:45.609 Asymmetric Namespace Access Log Page: Supported 00:26:45.609 ANA Transition Time : 10 sec 00:26:45.609 00:26:45.609 Asymmetric Namespace Access Capabilities 00:26:45.609 ANA Optimized State : Supported 00:26:45.609 ANA Non-Optimized State : Supported 00:26:45.609 ANA Inaccessible State : Supported 00:26:45.609 ANA Persistent Loss State : Supported 00:26:45.609 ANA Change State : Supported 00:26:45.609 ANAGRPID is not changed : No 00:26:45.609 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:45.609 00:26:45.609 ANA Group Identifier Maximum : 128 00:26:45.609 Number of ANA Group Identifiers : 128 00:26:45.609 Max Number of Allowed Namespaces : 1024 00:26:45.609 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:45.609 Command Effects Log Page: Supported 00:26:45.609 Get Log Page Extended Data: Supported 00:26:45.609 Telemetry Log Pages: Not Supported 00:26:45.609 Persistent Event Log Pages: Not Supported 00:26:45.609 Supported Log Pages Log Page: May Support 00:26:45.609 Commands Supported & Effects Log Page: Not Supported 00:26:45.609 Feature Identifiers & Effects Log Page:May Support 00:26:45.609 NVMe-MI Commands & Effects Log Page: May Support 00:26:45.609 Data Area 4 for Telemetry Log: Not Supported 00:26:45.609 Error Log Page Entries Supported: 128 00:26:45.609 Keep Alive: Supported 00:26:45.609 Keep Alive Granularity: 1000 ms 00:26:45.609 00:26:45.609 NVM Command Set Attributes 00:26:45.609 ========================== 00:26:45.609 Submission Queue Entry Size 00:26:45.609 Max: 64 00:26:45.609 Min: 64 00:26:45.609 Completion Queue Entry Size 00:26:45.609 Max: 16 00:26:45.609 Min: 16 00:26:45.609 Number of Namespaces: 1024 00:26:45.609 Compare Command: Not Supported 00:26:45.609 Write Uncorrectable Command: Not Supported 00:26:45.609 Dataset Management Command: Supported 00:26:45.609 Write Zeroes Command: Supported 00:26:45.609 Set Features Save Field: Not Supported 00:26:45.609 Reservations: Not Supported 00:26:45.609 Timestamp: Not Supported 
00:26:45.609 Copy: Not Supported 00:26:45.609 Volatile Write Cache: Present 00:26:45.609 Atomic Write Unit (Normal): 1 00:26:45.609 Atomic Write Unit (PFail): 1 00:26:45.609 Atomic Compare & Write Unit: 1 00:26:45.609 Fused Compare & Write: Not Supported 00:26:45.609 Scatter-Gather List 00:26:45.609 SGL Command Set: Supported 00:26:45.609 SGL Keyed: Not Supported 00:26:45.609 SGL Bit Bucket Descriptor: Not Supported 00:26:45.609 SGL Metadata Pointer: Not Supported 00:26:45.609 Oversized SGL: Not Supported 00:26:45.609 SGL Metadata Address: Not Supported 00:26:45.609 SGL Offset: Supported 00:26:45.609 Transport SGL Data Block: Not Supported 00:26:45.609 Replay Protected Memory Block: Not Supported 00:26:45.609 00:26:45.609 Firmware Slot Information 00:26:45.609 ========================= 00:26:45.609 Active slot: 0 00:26:45.609 00:26:45.609 Asymmetric Namespace Access 00:26:45.609 =========================== 00:26:45.609 Change Count : 0 00:26:45.609 Number of ANA Group Descriptors : 1 00:26:45.609 ANA Group Descriptor : 0 00:26:45.609 ANA Group ID : 1 00:26:45.609 Number of NSID Values : 1 00:26:45.609 Change Count : 0 00:26:45.609 ANA State : 1 00:26:45.609 Namespace Identifier : 1 00:26:45.609 00:26:45.609 Commands Supported and Effects 00:26:45.609 ============================== 00:26:45.609 Admin Commands 00:26:45.609 -------------- 00:26:45.609 Get Log Page (02h): Supported 00:26:45.609 Identify (06h): Supported 00:26:45.609 Abort (08h): Supported 00:26:45.609 Set Features (09h): Supported 00:26:45.609 Get Features (0Ah): Supported 00:26:45.609 Asynchronous Event Request (0Ch): Supported 00:26:45.609 Keep Alive (18h): Supported 00:26:45.609 I/O Commands 00:26:45.609 ------------ 00:26:45.609 Flush (00h): Supported 00:26:45.609 Write (01h): Supported LBA-Change 00:26:45.609 Read (02h): Supported 00:26:45.609 Write Zeroes (08h): Supported LBA-Change 00:26:45.609 Dataset Management (09h): Supported 00:26:45.609 00:26:45.609 Error Log 00:26:45.609 ========= 
00:26:45.609 Entry: 0 00:26:45.609 Error Count: 0x3 00:26:45.609 Submission Queue Id: 0x0 00:26:45.609 Command Id: 0x5 00:26:45.609 Phase Bit: 0 00:26:45.609 Status Code: 0x2 00:26:45.609 Status Code Type: 0x0 00:26:45.609 Do Not Retry: 1 00:26:45.609 Error Location: 0x28 00:26:45.609 LBA: 0x0 00:26:45.609 Namespace: 0x0 00:26:45.609 Vendor Log Page: 0x0 00:26:45.609 ----------- 00:26:45.609 Entry: 1 00:26:45.609 Error Count: 0x2 00:26:45.609 Submission Queue Id: 0x0 00:26:45.609 Command Id: 0x5 00:26:45.609 Phase Bit: 0 00:26:45.609 Status Code: 0x2 00:26:45.609 Status Code Type: 0x0 00:26:45.609 Do Not Retry: 1 00:26:45.609 Error Location: 0x28 00:26:45.609 LBA: 0x0 00:26:45.609 Namespace: 0x0 00:26:45.609 Vendor Log Page: 0x0 00:26:45.609 ----------- 00:26:45.609 Entry: 2 00:26:45.609 Error Count: 0x1 00:26:45.609 Submission Queue Id: 0x0 00:26:45.609 Command Id: 0x4 00:26:45.610 Phase Bit: 0 00:26:45.610 Status Code: 0x2 00:26:45.610 Status Code Type: 0x0 00:26:45.610 Do Not Retry: 1 00:26:45.610 Error Location: 0x28 00:26:45.610 LBA: 0x0 00:26:45.610 Namespace: 0x0 00:26:45.610 Vendor Log Page: 0x0 00:26:45.610 00:26:45.610 Number of Queues 00:26:45.610 ================ 00:26:45.610 Number of I/O Submission Queues: 128 00:26:45.610 Number of I/O Completion Queues: 128 00:26:45.610 00:26:45.610 ZNS Specific Controller Data 00:26:45.610 ============================ 00:26:45.610 Zone Append Size Limit: 0 00:26:45.610 00:26:45.610 00:26:45.610 Active Namespaces 00:26:45.610 ================= 00:26:45.610 get_feature(0x05) failed 00:26:45.610 Namespace ID:1 00:26:45.610 Command Set Identifier: NVM (00h) 00:26:45.610 Deallocate: Supported 00:26:45.610 Deallocated/Unwritten Error: Not Supported 00:26:45.610 Deallocated Read Value: Unknown 00:26:45.610 Deallocate in Write Zeroes: Not Supported 00:26:45.610 Deallocated Guard Field: 0xFFFF 00:26:45.610 Flush: Supported 00:26:45.610 Reservation: Not Supported 00:26:45.610 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:45.610 Size (in LBAs): 1953525168 (931GiB) 00:26:45.610 Capacity (in LBAs): 1953525168 (931GiB) 00:26:45.610 Utilization (in LBAs): 1953525168 (931GiB) 00:26:45.610 UUID: f0ad74ca-9284-4a84-b95d-dcdbf7f7e5b2 00:26:45.610 Thin Provisioning: Not Supported 00:26:45.610 Per-NS Atomic Units: Yes 00:26:45.610 Atomic Boundary Size (Normal): 0 00:26:45.610 Atomic Boundary Size (PFail): 0 00:26:45.610 Atomic Boundary Offset: 0 00:26:45.610 NGUID/EUI64 Never Reused: No 00:26:45.610 ANA group ID: 1 00:26:45.610 Namespace Write Protected: No 00:26:45.610 Number of LBA Formats: 1 00:26:45.610 Current LBA Format: LBA Format #00 00:26:45.610 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:45.610 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:45.610 rmmod nvme_tcp 00:26:45.610 rmmod nvme_fabrics 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
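The namespace geometry reported above (1953525168 LBAs, LBA Format #00 with a 512-byte data size) can be cross-checked with plain shell arithmetic; the 931 GiB figure in the log is the integer-GiB conversion of LBA count times LBA size:

```shell
# Cross-check the reported namespace size from the identify output above.
lba_count=1953525168   # "Size (in LBAs)"
lba_size=512           # "LBA Format #00: Data Size: 512"

bytes=$(( lba_count * lba_size ))
gib=$(( bytes / 1024 / 1024 / 1024 ))   # integer GiB (2^30 bytes)

echo "$bytes bytes = $gib GiB"   # 1000204886016 bytes = 931 GiB
```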
00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.610 05:51:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.514 05:51:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:47.514 05:51:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:47.514 05:51:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:47.514 05:51:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:47.514 05:51:35 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:47.514 05:51:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:47.773 05:51:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:47.773 05:51:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:47.773 05:51:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:47.773 05:51:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:47.773 05:51:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:50.307 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:50.566 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
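The clean_kernel_target steps traced above follow the usual kernel nvmet configfs teardown order: disable the namespace, unlink the subsystem from the port, remove the configfs directories child-first, then unload the transport modules. A sketch that emits the steps as text (so the ordering can be inspected or piped to a root shell); the NQN and port number are taken from the log, everything else mirrors what the trace shows:

```shell
# Emit the kernel nvmet teardown steps in the order the test log shows.
# Pipe the output to a root shell to actually execute them.
nvmet_teardown_steps() {
    local nqn="${1:-nqn.2016-06.io.spdk:testnqn}"
    local cfg=/sys/kernel/config/nvmet
    cat <<EOF
echo 0 > $cfg/subsystems/$nqn/namespaces/1/enable
rm -f $cfg/ports/1/subsystems/$nqn
rmdir $cfg/subsystems/$nqn/namespaces/1
rmdir $cfg/ports/1
rmdir $cfg/subsystems/$nqn
modprobe -r nvmet_tcp nvmet
EOF
}

nvmet_teardown_steps
```

Usage on a live target would be `nvmet_teardown_steps | sudo sh`; the child-before-parent rmdir order matters because configfs refuses to remove a subsystem that still has namespaces or port links.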
00:26:51.503 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:51.503 00:26:51.503 real 0m16.586s 00:26:51.503 user 0m4.286s 00:26:51.503 sys 0m8.660s 00:26:51.503 05:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:51.503 05:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:51.503 ************************************ 00:26:51.503 END TEST nvmf_identify_kernel_target 00:26:51.503 ************************************ 00:26:51.503 05:51:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:51.503 05:51:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:51.503 05:51:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:51.503 05:51:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.503 ************************************ 00:26:51.503 START TEST nvmf_auth_host 00:26:51.503 ************************************ 00:26:51.503 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:51.762 * Looking for test storage... 
00:26:51.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:51.762 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:51.762 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:51.762 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:51.762 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:51.762 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:51.762 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:51.762 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:51.762 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:51.762 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:51.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.763 --rc genhtml_branch_coverage=1 00:26:51.763 --rc genhtml_function_coverage=1 00:26:51.763 --rc genhtml_legend=1 00:26:51.763 --rc geninfo_all_blocks=1 00:26:51.763 --rc geninfo_unexecuted_blocks=1 00:26:51.763 00:26:51.763 ' 00:26:51.763 05:51:39 
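The cmp_versions trace above splits dotted versions on `.`/`-` and compares the components numerically (here deciding that lcov 1.15 < 2, which selects the newer LCOV option set). The core idea, a field-by-field numeric compare with missing fields treated as 0, can be sketched as (function name `version_lt` is illustrative; assumes purely numeric components, as in the trace):

```shell
# Return success (0) when dotted version $1 is strictly older than $2.
version_lt() {
    local IFS=.-                 # split on dots and dashes, like cmp_versions
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components default to 0 (so 1.15 vs 2 compares 1.15.0-style).
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                     # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```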
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:51.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.763 --rc genhtml_branch_coverage=1 00:26:51.763 --rc genhtml_function_coverage=1 00:26:51.763 --rc genhtml_legend=1 00:26:51.763 --rc geninfo_all_blocks=1 00:26:51.763 --rc geninfo_unexecuted_blocks=1 00:26:51.763 00:26:51.763 ' 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:51.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.763 --rc genhtml_branch_coverage=1 00:26:51.763 --rc genhtml_function_coverage=1 00:26:51.763 --rc genhtml_legend=1 00:26:51.763 --rc geninfo_all_blocks=1 00:26:51.763 --rc geninfo_unexecuted_blocks=1 00:26:51.763 00:26:51.763 ' 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:51.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.763 --rc genhtml_branch_coverage=1 00:26:51.763 --rc genhtml_function_coverage=1 00:26:51.763 --rc genhtml_legend=1 00:26:51.763 --rc geninfo_all_blocks=1 00:26:51.763 --rc geninfo_unexecuted_blocks=1 00:26:51.763 00:26:51.763 ' 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.763 05:51:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:51.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
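The `[: : integer expression expected` error visible above comes from common.sh testing an empty variable with `-eq` (`'[' '' -eq 1 ']'`). Substituting a default with `${var:-0}` makes the numeric test safe whether the variable is unset or empty; a minimal sketch (the variable name `flag` is illustrative, standing in for the unset knob in the log):

```shell
flag=""   # stands in for an unset/empty test knob

# Broken form, as in the trace: [ "$flag" -eq 1 ]
# -> "[: : integer expression expected" when $flag is empty

# Safe form: treat empty/unset as 0 before the numeric comparison.
if [ "${flag:-0}" -eq 1 ]; then
    echo "feature enabled"
else
    echo "feature disabled"
fi
```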
digests=("sha256" "sha384" "sha512") 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.763 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.764 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:51.764 05:51:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:51.764 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:51.764 05:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:58.333 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:58.334 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:58.334 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:58.334 Found net devices under 0000:af:00.0: cvl_0_0 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:58.334 Found net devices under 0000:af:00.1: cvl_0_1 00:26:58.334 05:51:45 
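The discovery trace above (common.sh@410-429) maps each matched PCI function to its kernel net devices by globbing sysfs. A minimal sketch of that walk, with a `sysfs_root` parameter added here as an assumption so it can be exercised against a fake tree (the real script hardcodes `/sys/bus/pci`):

```shell
#!/usr/bin/env bash
# Sketch of the sysfs walk in the trace: nvmf/common.sh expands
# "/sys/bus/pci/devices/$pci/net/"* and strips the path prefix to get
# interface names (e.g. cvl_0_0 under 0000:af:00.0).
list_pci_net_devs() {
    local sysfs_root=$1 pci=$2
    local devs=("$sysfs_root/devices/$pci/net/"*)
    # An unmatched glob stays literal; treat that as "no net devices".
    [[ -e ${devs[0]} ]] || return 1
    # Keep only the interface names, mirroring "${pci_net_devs[@]##*/}".
    printf '%s\n' "${devs[@]##*/}"
}

# Against the real tree: list_pci_net_devs /sys/bus/pci 0000:af:00.0
```

The trailing `##*/` expansion is the same trick the trace shows at common.sh@427.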
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:58.334 05:51:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:58.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:26:58.334 00:26:58.334 --- 10.0.0.2 ping statistics --- 00:26:58.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.334 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:58.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:26:58.334 00:26:58.334 --- 10.0.0.1 ping statistics --- 00:26:58.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.334 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1322727 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1322727 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
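The nvmf_tcp_init sequence traced above (common.sh@250-291) builds the test bed by moving one E810 port (`cvl_0_0`, the target) into a private network namespace while its sibling (`cvl_0_1`, the initiator) stays in the root namespace, then opens TCP port 4420 and verifies reachability with pings. A dry-run sketch of that sequence, with names and addresses taken from the log; without `--apply` (and root) it only prints the commands:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the traced nvmf_tcp_init bring-up. Pass --apply and
# run as root to execute; otherwise the commands are echoed unchanged.
setup_tcp_testbed() {
    local run=echo
    [[ $1 == --apply ]] && run=
    local tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    $run ip -4 addr flush "$tgt_if"
    $run ip -4 addr flush "$ini_if"
    $run ip netns add "$ns"
    $run ip link set "$tgt_if" netns "$ns"           # target side into the ns
    $run ip addr add 10.0.0.1/24 dev "$ini_if"       # initiator IP
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target IP
    $run ip link set "$ini_if" up
    $run ip netns exec "$ns" ip link set "$tgt_if" up
    $run ip netns exec "$ns" ip link set lo up
    $run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}
```

Putting the target in its own namespace is what lets a single host loop NVMe/TCP traffic over real NIC ports; the later `NVMF_TARGET_NS_CMD=(ip netns exec ...)` prefix runs nvmf_tgt inside that namespace.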
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1322727 ']' 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:58.334 05:51:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=38d6601b61c61a0bae86b04d201bdbe7 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Bye 00:26:58.334 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 38d6601b61c61a0bae86b04d201bdbe7 0 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 38d6601b61c61a0bae86b04d201bdbe7 0 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=38d6601b61c61a0bae86b04d201bdbe7 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Bye 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Bye 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Bye 
00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9f9a0a12c22d067ace867b15985a47dd6fef1f4b8de00f2c4c54362423a05385 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dMg 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9f9a0a12c22d067ace867b15985a47dd6fef1f4b8de00f2c4c54362423a05385 3 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9f9a0a12c22d067ace867b15985a47dd6fef1f4b8de00f2c4c54362423a05385 3 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9f9a0a12c22d067ace867b15985a47dd6fef1f4b8de00f2c4c54362423a05385 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dMg 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dMg 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.dMg 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a54dfb452e0acbf7db9f465ac5eb68f8790282b9b52a3d0b 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Z1N 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a54dfb452e0acbf7db9f465ac5eb68f8790282b9b52a3d0b 0 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a54dfb452e0acbf7db9f465ac5eb68f8790282b9b52a3d0b 0 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a54dfb452e0acbf7db9f465ac5eb68f8790282b9b52a3d0b 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Z1N 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Z1N 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Z1N 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a011c46f072ca23a024656fd223f446a80862cb0766c0966 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.thL 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a011c46f072ca23a024656fd223f446a80862cb0766c0966 2 00:26:58.335 05:51:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a011c46f072ca23a024656fd223f446a80862cb0766c0966 2 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a011c46f072ca23a024656fd223f446a80862cb0766c0966 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:58.335 05:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.thL 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.thL 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.thL 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=403177c5c2a0c0d77efa2e610866438a 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BTG 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 403177c5c2a0c0d77efa2e610866438a 1 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 403177c5c2a0c0d77efa2e610866438a 1 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=403177c5c2a0c0d77efa2e610866438a 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BTG 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BTG 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.BTG 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=af8bf80c6d99746059716dee5ecd8191 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9Fe 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key af8bf80c6d99746059716dee5ecd8191 1 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 af8bf80c6d99746059716dee5ecd8191 1 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=af8bf80c6d99746059716dee5ecd8191 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9Fe 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9Fe 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9Fe 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:58.335 05:51:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:58.335 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d6725f1cef7e82c0c5f7f371683225968771e622fa497e3c 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vrS 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d6725f1cef7e82c0c5f7f371683225968771e622fa497e3c 2 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d6725f1cef7e82c0c5f7f371683225968771e622fa497e3c 2 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d6725f1cef7e82c0c5f7f371683225968771e622fa497e3c 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vrS 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vrS 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.vrS 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:58.336 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0cc93018adfcdd0c66617e9189e75f71 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.JBO 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0cc93018adfcdd0c66617e9189e75f71 0 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0cc93018adfcdd0c66617e9189e75f71 0 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0cc93018adfcdd0c66617e9189e75f71 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.JBO 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.JBO 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.JBO 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f20ac4c54d6c846a985fd646e9e5c436da5ba7896856bfe07ed3d03c48474fb5 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Too 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f20ac4c54d6c846a985fd646e9e5c436da5ba7896856bfe07ed3d03c48474fb5 3 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f20ac4c54d6c846a985fd646e9e5c436da5ba7896856bfe07ed3d03c48474fb5 3 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f20ac4c54d6c846a985fd646e9e5c436da5ba7896856bfe07ed3d03c48474fb5 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:58.595 05:51:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Too 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Too 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Too 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1322727 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1322727 ']' 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
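Each `gen_dhchap_key <digest> <len>` call traced above draws `len/2` random bytes as hex via `xxd`, then pipes them through an inline Python snippet to produce the DHHC-1 secret representation. The layout below (base64 of the ASCII key followed by its little-endian CRC32, with the hash id as a two-digit hex field) is an assumption reconstructed from the traced commands and the DH-HMAC-CHAP secret format, not copied verbatim from nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Hedged sketch of the traced key generation.
# digest id: 0=null(no hash) 1=sha256 2=sha384 3=sha512, per the
# digests associative array shown in the trace.
gen_dhchap_key() {
    local digest_id=$1 len=$2
    local key
    # len hex chars of randomness, as in: xxd -p -c0 -l $((len/2)) /dev/urandom
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    # Assumed DHHC-1 layout: DHHC-1:<id as %02x>:<b64(key || crc32le(key))>:
    python3 -c 'import base64, sys, zlib
k = sys.argv[1].encode()
crc = zlib.crc32(k).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + crc).decode()))' \
        "$key" "$digest_id"
}

# e.g. gen_dhchap_key 0 32 emits something like DHHC-1:00:...:
```

The harness then writes each secret to a `mktemp -t spdk.key-<digest>.XXX` file and `chmod 0600`s it, yielding the `keys[N]` / `ckeys[N]` paths seen in the log.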
00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.595 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Bye 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.dMg ]] 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dMg 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Z1N 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.thL ]] 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.thL 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.BTG 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9Fe ]] 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Fe 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.vrS 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.854 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.JBO ]] 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.JBO 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Too 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.855 05:51:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:58.855 05:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:01.386 Waiting for block devices as requested 00:27:01.386 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:27:01.645 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:01.645 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:01.645 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:01.903 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:01.903 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:01.903 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:02.161 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:02.161 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:02.161 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:02.161 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:02.419 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:02.419 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:02.419 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:02.419 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:02.677 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:02.677 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:03.244 No valid GPT data, bailing 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:03.244 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:03.503 00:27:03.503 Discovery Log Number of Records 2, Generation counter 2 00:27:03.503 =====Discovery Log Entry 0====== 00:27:03.503 trtype: tcp 00:27:03.503 adrfam: ipv4 00:27:03.503 subtype: current discovery subsystem 00:27:03.503 treq: not specified, sq flow control disable supported 00:27:03.503 portid: 1 00:27:03.503 trsvcid: 4420 00:27:03.503 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:03.503 traddr: 10.0.0.1 00:27:03.503 eflags: none 00:27:03.503 sectype: none 00:27:03.503 =====Discovery Log Entry 1====== 00:27:03.503 trtype: tcp 00:27:03.503 adrfam: ipv4 00:27:03.503 subtype: nvme subsystem 00:27:03.503 treq: not specified, sq flow control disable supported 00:27:03.503 portid: 1 00:27:03.503 trsvcid: 4420 00:27:03.503 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:03.503 traddr: 10.0.0.1 00:27:03.503 eflags: none 00:27:03.503 sectype: none 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.503 nvme0n1 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:03.503 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.762 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.762 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:03.762 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.763 nvme0n1 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.763 05:51:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.763 
05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.763 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.022 nvme0n1 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4:
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ:
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4:
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]]
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ:
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:27:04.022 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.023 05:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.281 nvme0n1
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==:
00:27:04.281 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L:
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==:
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]]
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L:
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.282 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.541 nvme0n1
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=:
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=:
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.541 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.800 nvme0n1
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1:
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=:
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1:
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]]
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=:
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.800 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.801 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.060 nvme0n1
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==:
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==:
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==:
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]]
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==:
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.060 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.319 nvme0n1
00:27:05.319 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.319 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:05.319 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:05.319 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.319 05:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4:
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ:
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4:
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]]
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ:
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.319 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.578 nvme0n1
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==:
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L:
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==:
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]]
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L:
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:05.578 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:05.579 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:05.579 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:05.579 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:05.579 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:05.579 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:05.579 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:05.579 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:05.579 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.579 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.837 nvme0n1
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=:
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=:
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:05.838 05:51:53
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.838 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.097 nvme0n1 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.097 05:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.356 nvme0n1 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.356 
05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.356 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.357 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.357 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.357 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.357 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.357 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.357 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.357 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.357 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.616 nvme0n1 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.616 05:51:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.616 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.874 nvme0n1 00:27:06.874 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.874 05:51:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.874 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.874 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.874 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.874 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:07.133 
05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]] 00:27:07.133 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.134 05:51:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.134 05:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.392 nvme0n1 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.392 05:51:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.392 
05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.392 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.651 nvme0n1 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.651 05:51:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.651 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.218 nvme0n1 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.218 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.219 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.219 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.219 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.219 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.219 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.219 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.219 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.219 05:51:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.219 05:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.477 nvme0n1 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:08.477 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.478 05:51:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.478 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.044 nvme0n1 00:27:09.044 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.044 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.044 05:51:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.044 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.044 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.044 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.044 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.045 05:51:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]] 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.045 05:51:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.045 05:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.306 nvme0n1 00:27:09.306 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.306 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.306 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.306 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.306 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.661 05:51:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.661 05:51:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.661 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.002 nvme0n1 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.002 05:51:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.002 05:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.575 nvme0n1 00:27:10.575 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.575 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.576 05:51:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.576 05:51:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.576 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.576 05:51:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.143 nvme0n1 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.143 05:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.143 05:51:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.143 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.710 nvme0n1 00:27:11.710 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.710 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.710 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.710 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.710 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]] 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.969 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.970 05:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.537 nvme0n1 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.537 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.538 
05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.538 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.105 nvme0n1 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.105 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.106 05:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.364 nvme0n1 00:27:13.364 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.364 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.364 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.364 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.364 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.364 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.365 
05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.365 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.623 nvme0n1 
00:27:13.623 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.623 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:13.623 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:13.623 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.623 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.623 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.623 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:13.623 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4:
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ:
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4:
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]]
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ:
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.624 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.882 nvme0n1
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==:
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L:
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==:
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]]
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L:
00:27:13.882 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.883 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.141 nvme0n1
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=:
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=:
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.141 05:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.141 nvme0n1
00:27:14.141 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.141 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:14.141 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:14.141 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.141 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1:
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=:
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1:
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]]
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=:
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.400 nvme0n1
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.400 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.659 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==:
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==:
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==:
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]]
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==:
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.660 nvme0n1
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.660 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4:
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ:
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4:
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]]
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ:
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.919 nvme0n1
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.919 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==:
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L:
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==:
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]]
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L:
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.179 05:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.179 nvme0n1
00:27:15.179 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.179 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:15.179 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:15.179 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.179 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.179 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.179 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:15.179 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:15.438 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- #
xtrace_disable 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.439 nvme0n1 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.439 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.698 05:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.698 05:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.698 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.698 05:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.957 nvme0n1 00:27:15.957 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.957 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.957 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.957 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.957 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.957 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.958 
05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.958 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.217 nvme0n1 00:27:16.217 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.217 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.217 05:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.217 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.217 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.217 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.217 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.217 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.217 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.217 05:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.217 05:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.217 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.476 nvme0n1 00:27:16.476 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]] 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.477 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.736 05:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.736 nvme0n1 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.736 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.995 05:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.995 05:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.995 
05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.995 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.254 nvme0n1 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.254 05:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:17.254 05:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.254 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.512 nvme0n1 
00:27:17.512 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.512 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.512 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.512 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.512 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.512 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:17.770 05:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.770 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.771 
05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.771 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.030 nvme0n1 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.030 05:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.030 05:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.030 05:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.598 nvme0n1 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]] 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:18.598 05:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.598 05:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.598 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.857 nvme0n1 00:27:18.857 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.857 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.857 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.857 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.857 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.857 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.117 05:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:19.117 05:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.376 nvme0n1 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.376 05:52:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.376 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.944 nvme0n1 00:27:19.944 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:19.944 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.944 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.944 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.944 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.203 05:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.770 nvme0n1 00:27:20.770 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.770 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.770 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.770 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.771 05:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.338 nvme0n1 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]] 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.338 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.905 nvme0n1 00:27:21.905 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.905 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.905 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.905 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.905 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.906 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.906 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.906 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.906 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.906 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.165 05:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:22.733 nvme0n1 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:22.733 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:22.734 05:52:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.734 nvme0n1 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.734 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.993 nvme0n1 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:22.993 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.253 05:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.253 nvme0n1 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]] 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.253 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.512 nvme0n1 00:27:23.512 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.512 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.512 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.512 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.512 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.512 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.512 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.513 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:23.772 nvme0n1 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.772 05:52:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.772 05:52:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.772 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.031 nvme0n1 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:24.031 05:52:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.031 05:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.291 nvme0n1 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.291 
05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.291 05:52:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.291 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.550 nvme0n1 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.550 05:52:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]] 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.550 05:52:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.550 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.809 nvme0n1 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:24.809 05:52:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:24.809 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.810 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.069 nvme0n1 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.069 
05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.069 
05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.069 05:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.327 nvme0n1 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.327 05:52:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.327 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.328 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.587 nvme0n1 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.587 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.846 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.846 nvme0n1 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.105 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]] 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.106 05:52:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.106 05:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.364 nvme0n1 00:27:26.364 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.364 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.364 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.364 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.364 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.365 05:52:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=: 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.365 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.624 nvme0n1 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.624 
05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1: 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]] 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.624 05:52:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.624 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.191 nvme0n1 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.191 05:52:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.191 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.192 05:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.449 nvme0n1 00:27:27.450 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.450 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.450 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.450 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.450 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.450 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:27.708 
05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.708 05:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.708 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.967 nvme0n1 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.967 05:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==: 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]] 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.967 05:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.534 nvme0n1 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.535 05:52:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=:
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=:
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.535 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.793 nvme0n1
00:27:28.793 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.793 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:28.793 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:28.793 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.793 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.793 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1:
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=:
00:27:29.052 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzhkNjYwMWI2MWM2MWEwYmFlODZiMDRkMjAxYmRiZTcuoWV1:
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=: ]]
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWY5YTBhMTJjMjJkMDY3YWNlODY3YjE1OTg1YTQ3ZGQ2ZmVmMWY0YjhkZTAwZjJjNGM1NDM2MjQyM2EwNTM4NZ92ftI=:
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.053 05:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.620 nvme0n1
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==:
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==:
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==:
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]]
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==:
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.620 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.188 nvme0n1
00:27:30.188 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.188 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.188 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.188 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:30.188 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.188 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.188 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.188 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.188 05:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4:
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ:
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4:
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]]
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ:
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.188 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.755 nvme0n1
00:27:30.755 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.755 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.755 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:30.755 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.755 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.755 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.755 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.755 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.755 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.755 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==:
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L:
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDY3MjVmMWNlZjdlODJjMGM1ZjdmMzcxNjgzMjI1OTY4NzcxZTYyMmZhNDk3ZTNjzLKrSA==:
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L: ]]
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGNjOTMwMThhZGZjZGQwYzY2NjE3ZTkxODllNzVmNzGq5e1L:
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.014 05:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.580 nvme0n1
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:31.580 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=:
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjIwYWM0YzU0ZDZjODQ2YTk4NWZkNjQ2ZTllNWM0MzZkYTViYTc4OTY4NTZiZmUwN2VkM2QwM2M0ODQ3NGZiNTk0TG0=:
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.581 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.148 nvme0n1
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==:
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==:
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==:
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]]
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==:
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.148 05:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.148 request:
00:27:32.148 {
00:27:32.148 "name": "nvme0",
00:27:32.148 "trtype": "tcp",
00:27:32.148 "traddr": "10.0.0.1",
00:27:32.148 "adrfam": "ipv4",
00:27:32.148 "trsvcid": "4420",
00:27:32.148 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:27:32.148 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:27:32.148 "prchk_reftag": false,
00:27:32.148 "prchk_guard": false,
00:27:32.148 "hdgst": false,
00:27:32.148 "ddgst": false,
00:27:32.148 "allow_unrecognized_csi": false,
00:27:32.148 "method": "bdev_nvme_attach_controller",
00:27:32.148 "req_id": 1
00:27:32.148 }
00:27:32.148 Got JSON-RPC error response
00:27:32.148 response:
00:27:32.148 {
00:27:32.148 "code": -5,
00:27:32.148 "message": "Input/output error"
00:27:32.148 }
00:27:32.148 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:32.148 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:27:32.149 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:32.149 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:32.149 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:32.149 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:27:32.149 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.149 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.149 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.149 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.408 request:
00:27:32.408 { 00:27:32.408 "name": "nvme0", 00:27:32.408 "trtype": "tcp", 00:27:32.408 "traddr": "10.0.0.1", 00:27:32.408 "adrfam": "ipv4", 00:27:32.408 "trsvcid": "4420", 00:27:32.408 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:32.408 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:32.408 "prchk_reftag": false, 00:27:32.408 "prchk_guard": false, 00:27:32.408 "hdgst": false, 00:27:32.408 "ddgst": false, 00:27:32.408 "dhchap_key": "key2", 00:27:32.408 "allow_unrecognized_csi": false, 00:27:32.408 "method": "bdev_nvme_attach_controller", 00:27:32.408 "req_id": 1 00:27:32.408 } 00:27:32.408 Got JSON-RPC error response 00:27:32.408 response: 00:27:32.408 { 00:27:32.408 "code": -5, 00:27:32.408 "message": "Input/output error" 00:27:32.408 } 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:32.408 05:52:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.408 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.409 request: 00:27:32.409 { 00:27:32.409 "name": "nvme0", 00:27:32.409 "trtype": "tcp", 00:27:32.409 "traddr": "10.0.0.1", 00:27:32.409 "adrfam": "ipv4", 00:27:32.409 "trsvcid": "4420", 00:27:32.409 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:32.409 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:32.409 "prchk_reftag": false, 00:27:32.409 "prchk_guard": false, 00:27:32.409 "hdgst": false, 00:27:32.409 "ddgst": false, 00:27:32.409 "dhchap_key": "key1", 00:27:32.409 "dhchap_ctrlr_key": "ckey2", 00:27:32.409 "allow_unrecognized_csi": false, 00:27:32.409 "method": "bdev_nvme_attach_controller", 00:27:32.409 "req_id": 1 00:27:32.409 } 00:27:32.409 Got JSON-RPC error response 00:27:32.409 response: 00:27:32.409 { 00:27:32.409 "code": -5, 00:27:32.409 "message": "Input/output error" 00:27:32.409 } 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.409 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 nvme0n1 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:32.668 05:52:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:32.668 
05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.668 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.927 request: 00:27:32.927 { 00:27:32.927 "name": "nvme0", 00:27:32.927 "dhchap_key": "key1", 00:27:32.927 "dhchap_ctrlr_key": "ckey2", 00:27:32.927 "method": "bdev_nvme_set_keys", 00:27:32.927 "req_id": 1 00:27:32.927 } 00:27:32.927 Got JSON-RPC error response 00:27:32.927 response: 
00:27:32.927 { 00:27:32.927 "code": -13, 00:27:32.927 "message": "Permission denied" 00:27:32.927 } 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:32.927 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:33.863 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.863 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:33.863 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.863 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.863 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.863 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:33.863 05:52:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU0ZGZiNDUyZTBhY2JmN2RiOWY0NjVhYzVlYjY4Zjg3OTAyODJiOWI1MmEzZDBivT1lWQ==: 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: ]] 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTAxMWM0NmYwNzJjYTIzYTAyNDY1NmZkMjIzZjQ0NmE4MDg2MmNiMDc2NmMwOTY2ze/T0g==: 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.240 nvme0n1 00:27:35.240 05:52:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDAzMTc3YzVjMmEwYzBkNzdlZmEyZTYxMDg2NjQzOGHWJCQ4: 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: ]] 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWY4YmY4MGM2ZDk5NzQ2MDU5NzE2ZGVlNWVjZDgxOTEFhMTZ: 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:35.240 05:52:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.240 request: 00:27:35.240 { 00:27:35.240 "name": "nvme0", 00:27:35.240 "dhchap_key": "key2", 00:27:35.240 "dhchap_ctrlr_key": "ckey1", 00:27:35.240 "method": "bdev_nvme_set_keys", 00:27:35.240 "req_id": 1 00:27:35.240 } 00:27:35.240 Got JSON-RPC error response 00:27:35.240 response: 00:27:35.240 { 00:27:35.240 "code": -13, 00:27:35.240 "message": "Permission denied" 00:27:35.240 } 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:35.240 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.241 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:35.241 05:52:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.241 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.241 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.241 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:35.241 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.177 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:36.436 rmmod nvme_tcp 
00:27:36.436 rmmod nvme_fabrics 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1322727 ']' 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1322727 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1322727 ']' 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1322727 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1322727 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1322727' 00:27:36.436 killing process with pid 1322727 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1322727 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1322727 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.436 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:38.972 05:52:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:38.972 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:41.509 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:41.509 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:42.447 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:42.447 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Bye /tmp/spdk.key-null.Z1N /tmp/spdk.key-sha256.BTG /tmp/spdk.key-sha384.vrS 
/tmp/spdk.key-sha512.Too /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:42.447 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:45.738 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:45.738 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:45.738 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:45.738 00:27:45.738 real 0m53.783s 00:27:45.738 user 0m48.615s 00:27:45.738 sys 0m12.506s 00:27:45.738 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:45.738 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.738 ************************************ 00:27:45.738 END TEST nvmf_auth_host 00:27:45.738 ************************************ 00:27:45.738 05:52:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:27:45.738 05:52:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:45.738 05:52:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:45.738 05:52:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.738 05:52:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.738 ************************************ 00:27:45.738 START TEST nvmf_digest 00:27:45.738 ************************************ 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:45.739 * Looking for test storage... 00:27:45.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.739 05:52:33 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:45.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.739 --rc genhtml_branch_coverage=1 00:27:45.739 --rc genhtml_function_coverage=1 00:27:45.739 --rc genhtml_legend=1 00:27:45.739 --rc geninfo_all_blocks=1 00:27:45.739 --rc geninfo_unexecuted_blocks=1 00:27:45.739 00:27:45.739 ' 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:45.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.739 --rc genhtml_branch_coverage=1 00:27:45.739 --rc genhtml_function_coverage=1 00:27:45.739 --rc genhtml_legend=1 00:27:45.739 --rc geninfo_all_blocks=1 00:27:45.739 --rc geninfo_unexecuted_blocks=1 00:27:45.739 00:27:45.739 ' 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:45.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.739 --rc genhtml_branch_coverage=1 00:27:45.739 --rc genhtml_function_coverage=1 00:27:45.739 --rc genhtml_legend=1 00:27:45.739 --rc geninfo_all_blocks=1 00:27:45.739 --rc geninfo_unexecuted_blocks=1 00:27:45.739 00:27:45.739 ' 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:45.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.739 --rc genhtml_branch_coverage=1 00:27:45.739 --rc genhtml_function_coverage=1 00:27:45.739 --rc genhtml_legend=1 00:27:45.739 --rc geninfo_all_blocks=1 00:27:45.739 --rc geninfo_unexecuted_blocks=1 00:27:45.739 00:27:45.739 ' 00:27:45.739 05:52:33 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.739 
05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:45.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:45.739 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.740 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.740 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.740 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:45.740 05:52:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:45.740 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.740 05:52:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.310 05:52:39 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.310 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:52.311 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:52.311 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:52.311 Found net devices under 0000:af:00.0: cvl_0_0 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:52.311 Found net devices under 0000:af:00.1: cvl_0_1 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:52.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:27:52.311 00:27:52.311 --- 10.0.0.2 ping statistics --- 00:27:52.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.311 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:27:52.311 00:27:52.311 --- 10.0.0.1 ping statistics --- 00:27:52.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.311 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:52.311 ************************************ 00:27:52.311 START TEST nvmf_digest_clean 00:27:52.311 ************************************ 00:27:52.311 
05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1336435 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1336435 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1336435 ']' 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:52.311 05:52:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:52.311 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.311 [2024-12-10 05:52:39.422733] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:27:52.312 [2024-12-10 05:52:39.422772] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.312 [2024-12-10 05:52:39.501755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.312 [2024-12-10 05:52:39.540420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.312 [2024-12-10 05:52:39.540453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.312 [2024-12-10 05:52:39.540460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.312 [2024-12-10 05:52:39.540466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.312 [2024-12-10 05:52:39.540471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:52.312 [2024-12-10 05:52:39.540951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.312 null0 00:27:52.312 [2024-12-10 05:52:39.695621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.312 [2024-12-10 05:52:39.719805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1336460 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1336460 /var/tmp/bperf.sock 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1336460 ']' 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:52.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.312 [2024-12-10 05:52:39.772828] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:27:52.312 [2024-12-10 05:52:39.772869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336460 ] 00:27:52.312 [2024-12-10 05:52:39.846899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.312 [2024-12-10 05:52:39.887000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:52.312 05:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:52.312 05:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.312 05:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.880 nvme0n1 00:27:52.880 05:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:52.880 05:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:52.880 Running I/O for 2 seconds... 00:27:54.858 24416.00 IOPS, 95.38 MiB/s [2024-12-10T04:52:42.754Z] 25062.50 IOPS, 97.90 MiB/s 00:27:54.858 Latency(us) 00:27:54.858 [2024-12-10T04:52:42.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.858 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:54.858 nvme0n1 : 2.00 25072.17 97.94 0.00 0.00 5100.62 2527.82 17975.59 00:27:54.858 [2024-12-10T04:52:42.754Z] =================================================================================================================== 00:27:54.858 [2024-12-10T04:52:42.754Z] Total : 25072.17 97.94 0.00 0.00 5100.62 2527.82 17975.59 00:27:54.858 { 00:27:54.858 "results": [ 00:27:54.858 { 00:27:54.858 "job": "nvme0n1", 00:27:54.858 "core_mask": "0x2", 00:27:54.858 "workload": "randread", 00:27:54.858 "status": "finished", 00:27:54.858 "queue_depth": 128, 00:27:54.858 "io_size": 4096, 00:27:54.858 "runtime": 2.004334, 00:27:54.858 "iops": 25072.168610620785, 00:27:54.858 "mibps": 97.93815863523744, 00:27:54.858 "io_failed": 0, 00:27:54.858 "io_timeout": 0, 00:27:54.858 "avg_latency_us": 5100.616994882087, 00:27:54.858 "min_latency_us": 2527.8171428571427, 00:27:54.858 "max_latency_us": 17975.588571428572 00:27:54.858 } 00:27:54.858 ], 00:27:54.858 "core_count": 1 00:27:54.858 } 00:27:54.858 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:54.858 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:27:54.858 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:54.858 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:54.858 | select(.opcode=="crc32c") 00:27:54.858 | "\(.module_name) \(.executed)"' 00:27:54.858 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1336460 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1336460 ']' 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1336460 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1336460 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1336460' 00:27:55.117 killing process with pid 1336460 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1336460 00:27:55.117 Received shutdown signal, test time was about 2.000000 seconds 00:27:55.117 00:27:55.117 Latency(us) 00:27:55.117 [2024-12-10T04:52:43.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.117 [2024-12-10T04:52:43.013Z] =================================================================================================================== 00:27:55.117 [2024-12-10T04:52:43.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:55.117 05:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1336460 00:27:55.376 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1336957 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1336957 /var/tmp/bperf.sock 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1336957 ']' 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:55.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.377 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.377 [2024-12-10 05:52:43.157807] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:27:55.377 [2024-12-10 05:52:43.157853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336957 ] 00:27:55.377 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:55.377 Zero copy mechanism will not be used. 
00:27:55.377 [2024-12-10 05:52:43.233524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.636 [2024-12-10 05:52:43.272218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.636 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.636 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:55.636 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:55.636 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:55.636 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:55.894 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:55.894 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.152 nvme0n1 00:27:56.152 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:56.152 05:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:56.152 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:56.152 Zero copy mechanism will not be used. 00:27:56.152 Running I/O for 2 seconds... 
00:27:58.467 5550.00 IOPS, 693.75 MiB/s [2024-12-10T04:52:46.363Z] 5809.00 IOPS, 726.12 MiB/s 00:27:58.467 Latency(us) 00:27:58.467 [2024-12-10T04:52:46.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.467 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:58.467 nvme0n1 : 2.00 5810.30 726.29 0.00 0.00 2751.03 667.06 9924.02 00:27:58.467 [2024-12-10T04:52:46.363Z] =================================================================================================================== 00:27:58.467 [2024-12-10T04:52:46.363Z] Total : 5810.30 726.29 0.00 0.00 2751.03 667.06 9924.02 00:27:58.467 { 00:27:58.467 "results": [ 00:27:58.467 { 00:27:58.467 "job": "nvme0n1", 00:27:58.467 "core_mask": "0x2", 00:27:58.467 "workload": "randread", 00:27:58.467 "status": "finished", 00:27:58.467 "queue_depth": 16, 00:27:58.467 "io_size": 131072, 00:27:58.467 "runtime": 2.002305, 00:27:58.467 "iops": 5810.303625072104, 00:27:58.467 "mibps": 726.287953134013, 00:27:58.467 "io_failed": 0, 00:27:58.467 "io_timeout": 0, 00:27:58.467 "avg_latency_us": 2751.034674394427, 00:27:58.467 "min_latency_us": 667.0628571428572, 00:27:58.467 "max_latency_us": 9924.022857142858 00:27:58.467 } 00:27:58.467 ], 00:27:58.467 "core_count": 1 00:27:58.467 } 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:58.467 | select(.opcode=="crc32c") 00:27:58.467 | "\(.module_name) \(.executed)"' 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1336957 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1336957 ']' 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1336957 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1336957 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1336957' 00:27:58.467 killing process with pid 1336957 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1336957 00:27:58.467 Received shutdown signal, test time was about 2.000000 seconds 
00:27:58.467 00:27:58.467 Latency(us) 00:27:58.467 [2024-12-10T04:52:46.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.467 [2024-12-10T04:52:46.363Z] =================================================================================================================== 00:27:58.467 [2024-12-10T04:52:46.363Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.467 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1336957 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1337594 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1337594 /var/tmp/bperf.sock 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1337594 ']' 00:27:58.726 05:52:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:58.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.726 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.726 [2024-12-10 05:52:46.536207] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:27:58.726 [2024-12-10 05:52:46.536255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337594 ] 00:27:58.726 [2024-12-10 05:52:46.608416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.986 [2024-12-10 05:52:46.647292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.986 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.986 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:58.986 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:58.986 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:58.986 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:59.244 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.244 05:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.503 nvme0n1 00:27:59.503 05:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:59.503 05:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:59.762 Running I/O for 2 seconds... 
00:28:01.638 27078.00 IOPS, 105.77 MiB/s [2024-12-10T04:52:49.534Z] 27211.00 IOPS, 106.29 MiB/s 00:28:01.638 Latency(us) 00:28:01.638 [2024-12-10T04:52:49.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.638 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:01.638 nvme0n1 : 2.01 27215.68 106.31 0.00 0.00 4695.56 3354.82 14854.83 00:28:01.638 [2024-12-10T04:52:49.534Z] =================================================================================================================== 00:28:01.638 [2024-12-10T04:52:49.534Z] Total : 27215.68 106.31 0.00 0.00 4695.56 3354.82 14854.83 00:28:01.638 { 00:28:01.638 "results": [ 00:28:01.638 { 00:28:01.638 "job": "nvme0n1", 00:28:01.638 "core_mask": "0x2", 00:28:01.638 "workload": "randwrite", 00:28:01.638 "status": "finished", 00:28:01.638 "queue_depth": 128, 00:28:01.638 "io_size": 4096, 00:28:01.638 "runtime": 2.005829, 00:28:01.638 "iops": 27215.679900928742, 00:28:01.638 "mibps": 106.3112496130029, 00:28:01.638 "io_failed": 0, 00:28:01.638 "io_timeout": 0, 00:28:01.638 "avg_latency_us": 4695.555300517276, 00:28:01.638 "min_latency_us": 3354.8190476190475, 00:28:01.638 "max_latency_us": 14854.826666666666 00:28:01.638 } 00:28:01.638 ], 00:28:01.638 "core_count": 1 00:28:01.638 } 00:28:01.638 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:01.638 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:01.638 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:01.638 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:01.638 | select(.opcode=="crc32c") 00:28:01.638 | "\(.module_name) \(.executed)"' 00:28:01.638 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1337594 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1337594 ']' 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1337594 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1337594 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1337594' 00:28:01.897 killing process with pid 1337594 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1337594 00:28:01.897 Received shutdown signal, test time was about 2.000000 seconds 
00:28:01.897 00:28:01.897 Latency(us) 00:28:01.897 [2024-12-10T04:52:49.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.897 [2024-12-10T04:52:49.793Z] =================================================================================================================== 00:28:01.897 [2024-12-10T04:52:49.793Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:01.897 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1337594 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1338062 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1338062 /var/tmp/bperf.sock 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1338062 ']' 00:28:02.156 05:52:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:02.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.156 05:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:02.156 [2024-12-10 05:52:49.948711] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:28:02.156 [2024-12-10 05:52:49.948762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338062 ] 00:28:02.156 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:02.156 Zero copy mechanism will not be used. 
00:28:02.156 [2024-12-10 05:52:50.024552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.415 [2024-12-10 05:52:50.068524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.415 05:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.415 05:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:02.415 05:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:02.415 05:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:02.415 05:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:02.674 05:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.674 05:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.933 nvme0n1 00:28:03.192 05:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:03.192 05:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:03.192 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:03.192 Zero copy mechanism will not be used. 00:28:03.192 Running I/O for 2 seconds... 
00:28:05.064 6183.00 IOPS, 772.88 MiB/s [2024-12-10T04:52:52.960Z] 6465.00 IOPS, 808.12 MiB/s 00:28:05.064 Latency(us) 00:28:05.064 [2024-12-10T04:52:52.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.064 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:05.064 nvme0n1 : 2.00 6462.87 807.86 0.00 0.00 2471.42 1950.48 5679.79 00:28:05.065 [2024-12-10T04:52:52.961Z] =================================================================================================================== 00:28:05.065 [2024-12-10T04:52:52.961Z] Total : 6462.87 807.86 0.00 0.00 2471.42 1950.48 5679.79 00:28:05.065 { 00:28:05.065 "results": [ 00:28:05.065 { 00:28:05.065 "job": "nvme0n1", 00:28:05.065 "core_mask": "0x2", 00:28:05.065 "workload": "randwrite", 00:28:05.065 "status": "finished", 00:28:05.065 "queue_depth": 16, 00:28:05.065 "io_size": 131072, 00:28:05.065 "runtime": 2.003753, 00:28:05.065 "iops": 6462.87241990405, 00:28:05.065 "mibps": 807.8590524880062, 00:28:05.065 "io_failed": 0, 00:28:05.065 "io_timeout": 0, 00:28:05.065 "avg_latency_us": 2471.4156972972974, 00:28:05.065 "min_latency_us": 1950.4761904761904, 00:28:05.065 "max_latency_us": 5679.786666666667 00:28:05.065 } 00:28:05.065 ], 00:28:05.065 "core_count": 1 00:28:05.065 } 00:28:05.324 05:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:05.324 05:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:05.324 05:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:05.324 05:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:05.324 | select(.opcode=="crc32c") 00:28:05.324 | "\(.module_name) \(.executed)"' 00:28:05.324 05:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:05.324 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:05.324 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:05.324 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:05.324 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:05.324 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1338062 00:28:05.324 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1338062 ']' 00:28:05.324 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1338062 00:28:05.324 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:05.324 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.324 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1338062 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1338062' 00:28:05.583 killing process with pid 1338062 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1338062 00:28:05.583 Received shutdown signal, test time was about 2.000000 seconds 
00:28:05.583 00:28:05.583 Latency(us) 00:28:05.583 [2024-12-10T04:52:53.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.583 [2024-12-10T04:52:53.479Z] =================================================================================================================== 00:28:05.583 [2024-12-10T04:52:53.479Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1338062 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1336435 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1336435 ']' 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1336435 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1336435 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1336435' 00:28:05.583 killing process with pid 1336435 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1336435 00:28:05.583 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1336435 00:28:05.842 00:28:05.842 
real 0m14.239s 00:28:05.842 user 0m27.325s 00:28:05.842 sys 0m4.629s 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.842 ************************************ 00:28:05.842 END TEST nvmf_digest_clean 00:28:05.842 ************************************ 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.842 ************************************ 00:28:05.842 START TEST nvmf_digest_error 00:28:05.842 ************************************ 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1338754 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1338754 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1338754 ']' 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:05.842 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:06.102 [2024-12-10 05:52:53.740933] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:28:06.102 [2024-12-10 05:52:53.740976] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.102 [2024-12-10 05:52:53.820212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.102 [2024-12-10 05:52:53.858802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.102 [2024-12-10 05:52:53.858835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:06.102 [2024-12-10 05:52:53.858842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.102 [2024-12-10 05:52:53.858848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.102 [2024-12-10 05:52:53.858853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.102 [2024-12-10 05:52:53.859341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:06.102 [2024-12-10 05:52:53.923771] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.102 05:52:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.102 05:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:06.361 null0 00:28:06.361 [2024-12-10 05:52:54.014477] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.361 [2024-12-10 05:52:54.038666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1338784 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1338784 /var/tmp/bperf.sock 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1338784 ']' 
00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.361 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:06.361 [2024-12-10 05:52:54.090865] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:28:06.361 [2024-12-10 05:52:54.090906] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338784 ] 00:28:06.361 [2024-12-10 05:52:54.164858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.361 [2024-12-10 05:52:54.205356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.620 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.620 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:06.620 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:06.620 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:06.620 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:06.620 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.620 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:06.620 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.620 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.620 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.188 nvme0n1 00:28:07.188 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:07.188 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.188 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.188 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.188 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:07.188 05:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.188 Running I/O for 2 seconds... 00:28:07.188 [2024-12-10 05:52:55.041130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.188 [2024-12-10 05:52:55.041162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.188 [2024-12-10 05:52:55.041178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.188 [2024-12-10 05:52:55.049485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.188 [2024-12-10 05:52:55.049507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.188 [2024-12-10 05:52:55.049516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.188 [2024-12-10 05:52:55.061509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.188 [2024-12-10 05:52:55.061530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.188 [2024-12-10 05:52:55.061538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.188 [2024-12-10 05:52:55.072354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.188 [2024-12-10 05:52:55.072375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1004 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.188 [2024-12-10 05:52:55.072383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.447 [2024-12-10 05:52:55.080813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.447 [2024-12-10 05:52:55.080834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.447 [2024-12-10 05:52:55.080843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.447 [2024-12-10 05:52:55.091162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.447 [2024-12-10 05:52:55.091187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.447 [2024-12-10 05:52:55.091195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.447 [2024-12-10 05:52:55.100178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.447 [2024-12-10 05:52:55.100199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.447 [2024-12-10 05:52:55.100207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.447 [2024-12-10 05:52:55.111687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.447 [2024-12-10 05:52:55.111707] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.447 [2024-12-10 05:52:55.111715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.447 [2024-12-10 05:52:55.120003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.447 [2024-12-10 05:52:55.120023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.447 [2024-12-10 05:52:55.120035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.447 [2024-12-10 05:52:55.132038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.447 [2024-12-10 05:52:55.132058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.447 [2024-12-10 05:52:55.132067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.447 [2024-12-10 05:52:55.142800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.447 [2024-12-10 05:52:55.142820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.447 [2024-12-10 05:52:55.142828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.447 [2024-12-10 05:52:55.154774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.447 [2024-12-10 
05:52:55.154794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.447 [2024-12-10 05:52:55.154802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.447 [2024-12-10 05:52:55.163457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.447 [2024-12-10 05:52:55.163477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.447 [2024-12-10 05:52:55.163485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.447 [2024-12-10 05:52:55.172754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.447 [2024-12-10 05:52:55.172777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.172785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.185496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.185517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.185525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.194054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.194074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.194082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.205011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.205030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.205038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.217589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.217608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.217616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.228456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.228476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.228484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.241789] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.241809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.241816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.249828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.249847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.249855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.262088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.262108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.262117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.270279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.270314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.270322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.282281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.282301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.282308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.294930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.294954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.294962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.305305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.305324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.305335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.314411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.314430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.314438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.326337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.326357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.326364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.448 [2024-12-10 05:52:55.337832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.448 [2024-12-10 05:52:55.337851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.448 [2024-12-10 05:52:55.337859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.350426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.350445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.350453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.358868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.358888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.358895] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.371206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.371226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.371234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.382255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.382277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.382284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.390736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.390755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.390764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.401970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.401994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4011 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.402002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.410259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.410279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.410287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.421624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.421644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.421652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.432528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.432548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.432556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.442659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.442679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:50 nsid:1 lba:11844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.442687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.450878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.450897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.450905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.460920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.460940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.460948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.469906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.469925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.707 [2024-12-10 05:52:55.469932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.707 [2024-12-10 05:52:55.479383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.707 [2024-12-10 05:52:55.479402] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.479409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.708 [2024-12-10 05:52:55.488498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.708 [2024-12-10 05:52:55.488518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.488526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.708 [2024-12-10 05:52:55.498113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.708 [2024-12-10 05:52:55.498133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.498141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.708 [2024-12-10 05:52:55.506484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.708 [2024-12-10 05:52:55.506503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.506511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.708 [2024-12-10 05:52:55.516305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb6cae0) 00:28:07.708 [2024-12-10 05:52:55.516324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.516332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.708 [2024-12-10 05:52:55.526250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.708 [2024-12-10 05:52:55.526270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.526282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.708 [2024-12-10 05:52:55.534897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.708 [2024-12-10 05:52:55.534917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.534924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.708 [2024-12-10 05:52:55.547599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.708 [2024-12-10 05:52:55.547619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.547627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.708 [2024-12-10 05:52:55.559289] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.708 [2024-12-10 05:52:55.559308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.559316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.708 [2024-12-10 05:52:55.568627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.708 [2024-12-10 05:52:55.568647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.568658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.708 [2024-12-10 05:52:55.580256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.708 [2024-12-10 05:52:55.580275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.580283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.708 [2024-12-10 05:52:55.592575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.708 [2024-12-10 05:52:55.592594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.708 [2024-12-10 05:52:55.592601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.605193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.605212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.605221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.613802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.613821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.613829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.625132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.625152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.625160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.637538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.637558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.637565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.647615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.647634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.647642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.656210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.656230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.656237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.667359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.667384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.667392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.679645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.679667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 
05:52:55.679675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.691293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.691314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.691321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.702820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.702842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.702849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.711068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.711089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.711097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.721778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.721799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9411 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.721807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.732873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.732894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.732903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.741776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.741796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.741804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.750966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.750986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.750994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.760902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.760922] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.760930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.770094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.770113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.770121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.779495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.779516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.967 [2024-12-10 05:52:55.779524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.967 [2024-12-10 05:52:55.787674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.967 [2024-12-10 05:52:55.787694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.968 [2024-12-10 05:52:55.787702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.968 [2024-12-10 05:52:55.797844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.968 [2024-12-10 
05:52:55.797864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.968 [2024-12-10 05:52:55.797872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.968 [2024-12-10 05:52:55.808016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.968 [2024-12-10 05:52:55.808036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.968 [2024-12-10 05:52:55.808044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.968 [2024-12-10 05:52:55.816650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.968 [2024-12-10 05:52:55.816669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.968 [2024-12-10 05:52:55.816676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.968 [2024-12-10 05:52:55.826448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.968 [2024-12-10 05:52:55.826468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.968 [2024-12-10 05:52:55.826476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.968 [2024-12-10 05:52:55.836736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb6cae0) 00:28:07.968 [2024-12-10 05:52:55.836757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.968 [2024-12-10 05:52:55.836768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.968 [2024-12-10 05:52:55.845296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.968 [2024-12-10 05:52:55.845317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.968 [2024-12-10 05:52:55.845325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.968 [2024-12-10 05:52:55.856744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:07.968 [2024-12-10 05:52:55.856765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.968 [2024-12-10 05:52:55.856773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.226 [2024-12-10 05:52:55.869088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:08.226 [2024-12-10 05:52:55.869110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-12-10 05:52:55.869118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.226 [2024-12-10 05:52:55.881563] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:08.226 [2024-12-10 05:52:55.881585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-12-10 05:52:55.881593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.226 [2024-12-10 05:52:55.893896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:08.226 [2024-12-10 05:52:55.893917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-12-10 05:52:55.893925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.226 [2024-12-10 05:52:55.903849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:08.226 [2024-12-10 05:52:55.903868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-12-10 05:52:55.903877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.226 [2024-12-10 05:52:55.912560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:08.226 [2024-12-10 05:52:55.912580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-12-10 05:52:55.912588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:08.226 [2024-12-10 05:52:55.923465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:08.227 [2024-12-10 05:52:55.923486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-12-10 05:52:55.923493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.227 [2024-12-10 05:52:55.934253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:08.227 [2024-12-10 05:52:55.934272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-12-10 05:52:55.934280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.227 [2024-12-10 05:52:55.942517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:08.227 [2024-12-10 05:52:55.942536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-12-10 05:52:55.942544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.227 [2024-12-10 05:52:55.953600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:08.227 [2024-12-10 05:52:55.953620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-12-10 05:52:55.953628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:08.227 [2024-12-10 05:52:55.962267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0)
00:28:08.227 [2024-12-10 05:52:55.962287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.227 [2024-12-10 05:52:55.962294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
24710.00 IOPS, 96.52 MiB/s [2024-12-10T04:52:56.123Z]
[... ~60 similar log triplets elided: nvme_tcp.c:1365 data digest errors on tqpair=(0xb6cae0), each followed by the affected READ command (qid:1, varying cid/lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, timestamps 05:52:55.971 through 05:52:56.760 ...]
00:28:09.007 [2024-12-10 05:52:56.769963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0)
00:28:09.007 [2024-12-10 05:52:56.769982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.007 [2024-12-10 05:52:56.769990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:09.007 [2024-12-10 05:52:56.778463]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.778482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.778489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.007 [2024-12-10 05:52:56.789442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.789462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.789469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.007 [2024-12-10 05:52:56.801036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.801061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.801070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.007 [2024-12-10 05:52:56.809355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.809374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.809381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:09.007 [2024-12-10 05:52:56.821371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.821391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.821398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.007 [2024-12-10 05:52:56.832153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.832176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.832184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.007 [2024-12-10 05:52:56.841016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.841035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.841043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.007 [2024-12-10 05:52:56.852490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.852510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.852518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.007 [2024-12-10 05:52:56.862277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.862297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.862305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.007 [2024-12-10 05:52:56.874014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.874034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.874042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.007 [2024-12-10 05:52:56.883508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.883529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.883536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.007 [2024-12-10 05:52:56.894411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.007 [2024-12-10 05:52:56.894431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.007 [2024-12-10 05:52:56.894440] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:56.902732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:56.902754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:56.902762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:56.913883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:56.913903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:56.913911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:56.922875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:56.922894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:56.922901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:56.931352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:56.931370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23162 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:56.931378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:56.940620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:56.940639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:56.940647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:56.950769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:56.950788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:56.950796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:56.960643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:56.960661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:56.960669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:56.968127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:56.968146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:24889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:56.968157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:56.979849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:56.979869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:56.979877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:56.992011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:56.992030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:56.992038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:57.004376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:57.004396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:57.004404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:57.017020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:57.017039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:57.017047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 [2024-12-10 05:52:57.028150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6cae0) 00:28:09.267 [2024-12-10 05:52:57.028175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.267 [2024-12-10 05:52:57.028183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.267 24915.00 IOPS, 97.32 MiB/s 00:28:09.267 Latency(us) 00:28:09.267 [2024-12-10T04:52:57.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.267 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:09.267 nvme0n1 : 2.00 24933.94 97.40 0.00 0.00 5128.50 2715.06 16477.62 00:28:09.267 [2024-12-10T04:52:57.163Z] =================================================================================================================== 00:28:09.267 [2024-12-10T04:52:57.163Z] Total : 24933.94 97.40 0.00 0.00 5128.50 2715.06 16477.62 00:28:09.267 { 00:28:09.267 "results": [ 00:28:09.267 { 00:28:09.267 "job": "nvme0n1", 00:28:09.267 "core_mask": "0x2", 00:28:09.267 "workload": "randread", 00:28:09.267 "status": "finished", 00:28:09.267 "queue_depth": 128, 00:28:09.267 "io_size": 4096, 00:28:09.267 "runtime": 2.003614, 00:28:09.267 "iops": 24933.944362536895, 00:28:09.267 "mibps": 97.39822016615975, 00:28:09.267 "io_failed": 0, 00:28:09.267 "io_timeout": 0, 00:28:09.267 "avg_latency_us": 5128.495062099783, 00:28:09.267 "min_latency_us": 2715.062857142857, 00:28:09.267 "max_latency_us": 16477.62285714286 
00:28:09.267 } 00:28:09.267 ], 00:28:09.267 "core_count": 1 00:28:09.267 } 00:28:09.267 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:09.267 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:09.267 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:09.267 | .driver_specific 00:28:09.267 | .nvme_error 00:28:09.267 | .status_code 00:28:09.267 | .command_transient_transport_error' 00:28:09.267 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:09.526 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 )) 00:28:09.526 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1338784 00:28:09.526 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1338784 ']' 00:28:09.526 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1338784 00:28:09.526 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:09.526 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.526 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1338784 00:28:09.526 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:09.526 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:09.526 05:52:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1338784' 00:28:09.526 killing process with pid 1338784 00:28:09.526 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1338784 00:28:09.526 Received shutdown signal, test time was about 2.000000 seconds 00:28:09.526 00:28:09.526 Latency(us) 00:28:09.526 [2024-12-10T04:52:57.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.526 [2024-12-10T04:52:57.422Z] =================================================================================================================== 00:28:09.526 [2024-12-10T04:52:57.422Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.527 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1338784 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1339449 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1339449 /var/tmp/bperf.sock 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:09.785 05:52:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1339449 ']' 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:09.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.785 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.785 [2024-12-10 05:52:57.535349] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:28:09.785 [2024-12-10 05:52:57.535396] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339449 ] 00:28:09.785 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:09.785 Zero copy mechanism will not be used. 
00:28:09.786 [2024-12-10 05:52:57.609290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.786 [2024-12-10 05:52:57.647906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.044 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.044 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:10.044 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:10.044 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:10.303 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:10.303 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.303 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.303 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.303 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.303 05:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.561 nvme0n1 00:28:10.561 05:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:10.561 05:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.561 05:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.561 05:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.561 05:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:10.561 05:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:10.821 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:10.821 Zero copy mechanism will not be used. 00:28:10.821 Running I/O for 2 seconds... 00:28:10.821 [2024-12-10 05:52:58.480532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:10.821 [2024-12-10 05:52:58.480566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.821 [2024-12-10 05:52:58.480577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.821 [2024-12-10 05:52:58.484938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:10.821 [2024-12-10 05:52:58.484962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.821 [2024-12-10 05:52:58.484971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.821 
[2024-12-10 05:52:58.489276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:10.821 [2024-12-10 05:52:58.489298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.821 [2024-12-10 05:52:58.489306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.821 [2024-12-10 05:52:58.493612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:10.821 [2024-12-10 05:52:58.493634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.821 [2024-12-10 05:52:58.493642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.821 [2024-12-10 05:52:58.497823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:10.821 [2024-12-10 05:52:58.497845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.821 [2024-12-10 05:52:58.497853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.821 [2024-12-10 05:52:58.502079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:10.821 [2024-12-10 05:52:58.502101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.821 [2024-12-10 05:52:58.502109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.506644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.506667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.506676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.511125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.511148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.511157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.515533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.515554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.515562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.519935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.519957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.519965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.524462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.524483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.524496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.528889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.528910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.528918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.533245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.533266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.533275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.537465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.537486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.537494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.541746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.541766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.541775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.546008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.546031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.546039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.550268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.550290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.550298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.554538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.554559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.821 [2024-12-10 05:52:58.554567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.821 [2024-12-10 05:52:58.558854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.821 [2024-12-10 05:52:58.558875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.558882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.564255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.564281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.564289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.570905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.570927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.570936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.577708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.577731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.577739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.583357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.583380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.583388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.590223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.590244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.590253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.596909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.596929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.596938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.600415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.600437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.600445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.606900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.606922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.606930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.613688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.613710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.613718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.619363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.619385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.619394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.624493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.624514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.624522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.629468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.629490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.629497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.634550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.634570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.634578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.639575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.639597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.639604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.644693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.644714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.644722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.649989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.650010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.650018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.655235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.655255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.655263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.660985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.661005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.661020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.666458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.666480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.666488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.672292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.672315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.672323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.678977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.678999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.679007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.685988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.686012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.686022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.693484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.693507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.693516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.700849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.700871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.700880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.822 [2024-12-10 05:52:58.708844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:10.822 [2024-12-10 05:52:58.708867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.822 [2024-12-10 05:52:58.708876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.082 [2024-12-10 05:52:58.717089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.082 [2024-12-10 05:52:58.717112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.082 [2024-12-10 05:52:58.717121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.082 [2024-12-10 05:52:58.724391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.082 [2024-12-10 05:52:58.724418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.082 [2024-12-10 05:52:58.724437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.082 [2024-12-10 05:52:58.731683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.082 [2024-12-10 05:52:58.731705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.082 [2024-12-10 05:52:58.731713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.082 [2024-12-10 05:52:58.738820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.082 [2024-12-10 05:52:58.738841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.082 [2024-12-10 05:52:58.738854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.082 [2024-12-10 05:52:58.746008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.082 [2024-12-10 05:52:58.746030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.082 [2024-12-10 05:52:58.746038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.082 [2024-12-10 05:52:58.753471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.082 [2024-12-10 05:52:58.753493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.082 [2024-12-10 05:52:58.753501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.082 [2024-12-10 05:52:58.760886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.082 [2024-12-10 05:52:58.760908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.082 [2024-12-10 05:52:58.760916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.082 [2024-12-10 05:52:58.767249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.082 [2024-12-10 05:52:58.767271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.082 [2024-12-10 05:52:58.767279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.082 [2024-12-10 05:52:58.774444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.774466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.774474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.782296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.782318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.782327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.789922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.789944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.789952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.797133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.797154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.797162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.803345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.803366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.803374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.808731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.808752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.808760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.813920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.813940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.813949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.819101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.819122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.819130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.824659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.824680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.824688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.831525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.831547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.831554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.838776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.838802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.838811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.846550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.846572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.846580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.853757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.853779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.853788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.861379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.861401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.861408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.869291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.869313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.869321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.876647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.876668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.876676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.884159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.884187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.884195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.891394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.891416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.891423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.898867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.898887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.898895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.906342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.906363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.906371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.913991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.914012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.914020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.921337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.921358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.921366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.929007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.929028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.929036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.936487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.936507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.936516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.942992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.943015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.943023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.948283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.948306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.948314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.953488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.953510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.953519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.083 [2024-12-10 05:52:58.958669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.083 [2024-12-10 05:52:58.958690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.083 [2024-12-10 05:52:58.958702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.084 [2024-12-10 05:52:58.963907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.084 [2024-12-10 05:52:58.963927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.084 [2024-12-10 05:52:58.963935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.084 [2024-12-10 05:52:58.969136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.084 [2024-12-10 05:52:58.969157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.084 [2024-12-10 05:52:58.969165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.343 [2024-12-10 05:52:58.974383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.343 [2024-12-10 05:52:58.974404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.343 [2024-12-10 05:52:58.974411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.343 [2024-12-10 05:52:58.979613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.343 [2024-12-10 05:52:58.979633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.343 [2024-12-10 05:52:58.979641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.343 [2024-12-10 05:52:58.984737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.343 [2024-12-10 05:52:58.984757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.343 [2024-12-10 05:52:58.984765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.343 [2024-12-10 05:52:58.989926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.343 [2024-12-10 05:52:58.989947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.343 [2024-12-10 05:52:58.989955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.343 [2024-12-10 05:52:58.995144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.343 [2024-12-10 05:52:58.995170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.343 [2024-12-10 05:52:58.995178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.344 [2024-12-10 05:52:59.000350]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.000371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.000378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.005566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.005590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.005598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.010768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.010788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.010796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.015956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.015978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.015986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.021081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.021102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.021109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.026254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.026275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.026283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.031450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.031470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.031478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.036605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.036625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.036633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.041828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.041848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.041856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.047029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.047050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.047058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.052124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.052145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.052152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.057191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.057211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 
05:52:59.057219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.062296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.062316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.062324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.067476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.067496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.067503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.072568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.072588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.072595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.077685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.077706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10176 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.077714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.082832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.082853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.082861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.088019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.088041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.088049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.093218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.093240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.093251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.098235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.098255] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.098263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.103276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.103297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.103304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.108381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.108401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.108408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.113534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.113554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.113562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.118689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 
05:52:59.118708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.118716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.123803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.123824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.123831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.128909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.128930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.128937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.134035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.344 [2024-12-10 05:52:59.134055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.344 [2024-12-10 05:52:59.134062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.344 [2024-12-10 05:52:59.139132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.139152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.139160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.144304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.144324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.144332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.149452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.149472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.149480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.154560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.154581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.154588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.159717] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.159737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.159745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.164841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.164861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.164869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.169921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.169941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.169949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.175030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.175048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.175056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.180100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.180120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.180132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.185294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.185316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.185323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.190698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.190719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.190726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.196060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.196081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.196088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.201509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.201529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.201537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.206937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.206957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.206965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.212283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.212302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.212310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.217611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.217631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.217639] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.222916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.222936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.222944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.228189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.228212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.228220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.345 [2024-12-10 05:52:59.233435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.345 [2024-12-10 05:52:59.233455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.345 [2024-12-10 05:52:59.233463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.238652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.238672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:11.605 [2024-12-10 05:52:59.238680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.243840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.243860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.605 [2024-12-10 05:52:59.243868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.249125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.249146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.605 [2024-12-10 05:52:59.249154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.254553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.254574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.605 [2024-12-10 05:52:59.254582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.259883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.259905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.605 [2024-12-10 05:52:59.259912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.265259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.265280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.605 [2024-12-10 05:52:59.265288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.270564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.270584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.605 [2024-12-10 05:52:59.270592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.275900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.275921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.605 [2024-12-10 05:52:59.275928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.281159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.281184] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.605 [2024-12-10 05:52:59.281192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.286327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.286347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.605 [2024-12-10 05:52:59.286355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.291718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.291739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.605 [2024-12-10 05:52:59.291747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.605 [2024-12-10 05:52:59.296854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.605 [2024-12-10 05:52:59.296874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.296882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.302223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.302243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.302251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.307662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.307682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.307690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.313146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.313171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.313179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.318479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.318499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.318511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.323726] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.323746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.323754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.329137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.329157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.329165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.334464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.334484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.334492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.339741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.339761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.339770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.344938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.344958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.344965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.349849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.349870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.349878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.355065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.355086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.355093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.360301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.360320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.360328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.365701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.365725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.365732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.370965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.370986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.370993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.376245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.376266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.376274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.381524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.381544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.381552] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.386763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.386784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.386791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.391971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.391991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.391999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.397200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.397220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.397228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.606 [2024-12-10 05:52:59.402452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.606 [2024-12-10 05:52:59.402473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:11.606 [2024-12-10 05:52:59.402481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.407801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.407822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.407829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.413138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.413159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.413172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.418173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.418193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.418201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.423894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.423915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.423922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.429124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.429145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.429152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.434508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.434528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.434536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.439806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.439827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.439834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.445185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 
05:52:59.445206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.445214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.448727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.448747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.448755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.453129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.453153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.453160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.458314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.458334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.458341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.463487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.463507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.463514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.468525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.468546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.468554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.473821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.473842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.473850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.607 5537.00 IOPS, 692.12 MiB/s [2024-12-10T04:52:59.503Z] [2024-12-10 05:52:59.480477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.480497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.480505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:28:11.607 [2024-12-10 05:52:59.485764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.485785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.485792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.607 [2024-12-10 05:52:59.491047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.607 [2024-12-10 05:52:59.491068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.607 [2024-12-10 05:52:59.491076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.496487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.496509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.496516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.502108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.502130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.502138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.507491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.507511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.507519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.512846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.512866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.512873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.518119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.518139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.518147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.523534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.523554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.523561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.528830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.528850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.528858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.534213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.534233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.534240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.539465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.539485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.539493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.544805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.544826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:11.867 [2024-12-10 05:52:59.544838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.550154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.550180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.550188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.555438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.555459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.555467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.560589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.560610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.560617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.565938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.565959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.565966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.571199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.571219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.571226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.576470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.576490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.576497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.581797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.581818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.581826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.587079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.587099] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.587107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.592383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.592407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.592414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.597849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.597869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.597877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.603263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:11.867 [2024-12-10 05:52:59.603283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.867 [2024-12-10 05:52:59.603291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.867 [2024-12-10 05:52:59.608658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c386a0)
00:28:11.867 [2024-12-10 05:52:59.608678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.867 [2024-12-10 05:52:59.608686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.867 [2024-12-10 05:52:59.613906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.867 [2024-12-10 05:52:59.613926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.867 [2024-12-10 05:52:59.613933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.867 [2024-12-10 05:52:59.619109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.867 [2024-12-10 05:52:59.619129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.867 [2024-12-10 05:52:59.619137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.867 [2024-12-10 05:52:59.624267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.867 [2024-12-10 05:52:59.624288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.867 [2024-12-10 05:52:59.624295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.867 [2024-12-10 05:52:59.629532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.867 [2024-12-10 05:52:59.629552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.867 [2024-12-10 05:52:59.629560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.634859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.634879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.634886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.640270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.640290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.640298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.645573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.645594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.645601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.650899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.650919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.650926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.656337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.656357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.656366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.661728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.661748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.661756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.667146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.667171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.667179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.672575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.672595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.672603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.677846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.677865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.677873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.683109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.683133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.683140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.688552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.688573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.688581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.694451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.694472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.694480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.701468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.701490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.701498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.709338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.709360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.709368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.716838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.716859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.716867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.724242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.724263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.724271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.732047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.732069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.732078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.739985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.740006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.740015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.748031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.748053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.748061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:11.868 [2024-12-10 05:52:59.756427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:11.868 [2024-12-10 05:52:59.756449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.868 [2024-12-10 05:52:59.756458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.764660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.764683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.764691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.772389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.772411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.772420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.780363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.780385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.780393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.788176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.788198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.788205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.795845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.795866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.795874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.803340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.803363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.803371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.810530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.810551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.810563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.816267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.816288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.816296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.822777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.822797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.822805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.829400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.829421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.829429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.834846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.834868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.834876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.839956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.839975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.839983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.844893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.844914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.844922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.850231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.850251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.850259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.855878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.855899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.855907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.861144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.861173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.861181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.866417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.866438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.866445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.871670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.871692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.871700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.876960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.876981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.876989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.882225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.882246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.882254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.887589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.887610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.887618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.892809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.892830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.892837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.898094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.898116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.898123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.128 [2024-12-10 05:52:59.903304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.128 [2024-12-10 05:52:59.903324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.128 [2024-12-10 05:52:59.903332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.906291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.906312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.906320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.911308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.911328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.911336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.916424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.916446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.916454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.921656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.921677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.921684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.926830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.926851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.926859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.931991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.932011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.932019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.937216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.937236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.937243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.942344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.942364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.942372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.947456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.947476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.947486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.952698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.952719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.952727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.957842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.957863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.957871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.962960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.962982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.962989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.967865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.967887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.967895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.972996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.973017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.973024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.978085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.978106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.978113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.983261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.983281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.983289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.988419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.988440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.988447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.993596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.993616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.993624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:52:59.998807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:52:59.998828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:52:59.998836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:53:00.003946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:53:00.003969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:53:00.003978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:53:00.010245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:53:00.010301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:53:00.010324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.129 [2024-12-10 05:53:00.016114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.129 [2024-12-10 05:53:00.016141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.129 [2024-12-10 05:53:00.016154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.389 [2024-12-10 05:53:00.022050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.389 [2024-12-10 05:53:00.022079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.389 [2024-12-10 05:53:00.022094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.389 [2024-12-10 05:53:00.027836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.389 [2024-12-10 05:53:00.027894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.389 [2024-12-10 05:53:00.027916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.389 [2024-12-10 05:53:00.033960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.389 [2024-12-10 05:53:00.033988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.389 [2024-12-10 05:53:00.034002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.389 [2024-12-10 05:53:00.039740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.389 [2024-12-10 05:53:00.039771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.389 [2024-12-10 05:53:00.039807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.389 [2024-12-10 05:53:00.045473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.389 [2024-12-10 05:53:00.045502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.389 [2024-12-10 05:53:00.045516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.389 [2024-12-10 05:53:00.051735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.389 [2024-12-10 05:53:00.051761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.389 [2024-12-10 05:53:00.051772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.389 [2024-12-10 05:53:00.057453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.389 [2024-12-10 05:53:00.057477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.389 [2024-12-10 05:53:00.057486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:12.389 [2024-12-10 05:53:00.062660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.389 [2024-12-10 05:53:00.062683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.389 [2024-12-10 05:53:00.062691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:12.389 [2024-12-10 05:53:00.067873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.389 [2024-12-10 05:53:00.067896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.389 [2024-12-10 05:53:00.067904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:12.389 [2024-12-10 05:53:00.073567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.389 [2024-12-10 05:53:00.073593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.389 [2024-12-10 05:53:00.073604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:12.389 [2024-12-10 05:53:00.078914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0)
00:28:12.389 [2024-12-10 05:53:00.078936] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.389 [2024-12-10 05:53:00.078945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.389 [2024-12-10 05:53:00.084326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.389 [2024-12-10 05:53:00.084348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.389 [2024-12-10 05:53:00.084357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.389 [2024-12-10 05:53:00.089756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.389 [2024-12-10 05:53:00.089783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.389 [2024-12-10 05:53:00.089791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.389 [2024-12-10 05:53:00.095149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.389 [2024-12-10 05:53:00.095178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.389 [2024-12-10 05:53:00.095187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.389 [2024-12-10 05:53:00.100491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c386a0) 00:28:12.389 [2024-12-10 05:53:00.100513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.389 [2024-12-10 05:53:00.100521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.389 [2024-12-10 05:53:00.105849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.105870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.105879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.111198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.111218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.111227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.116472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.116494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.116502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.121833] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.121854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.121863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.127199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.127220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.127228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.132587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.132609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.132617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.138049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.138070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.138078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.143352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.143373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.143381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.148736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.148757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.148765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.154034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.154055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.154063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.159425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.159446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.159454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.164871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.164892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.164901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.170527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.170548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.170556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.176089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.176111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.176119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.181672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.181694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 
05:53:00.181706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.186982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.187004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.187012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.192371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.192392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.192400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.197802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.197822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.197831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.203162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.203190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.203198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.208576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.208598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.208606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.213928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.213949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.213957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.219314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.219336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.219344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.224765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.224788] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.224799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.230144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.230176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.230184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.235498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.235519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.235527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.240954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.240976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.240984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.246365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 
05:53:00.246387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.390 [2024-12-10 05:53:00.246394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.390 [2024-12-10 05:53:00.251628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.390 [2024-12-10 05:53:00.251649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.391 [2024-12-10 05:53:00.251658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.391 [2024-12-10 05:53:00.256986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.391 [2024-12-10 05:53:00.257007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.391 [2024-12-10 05:53:00.257015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.391 [2024-12-10 05:53:00.262491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.391 [2024-12-10 05:53:00.262512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.391 [2024-12-10 05:53:00.262521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.391 [2024-12-10 05:53:00.267867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c386a0) 00:28:12.391 [2024-12-10 05:53:00.267889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.391 [2024-12-10 05:53:00.267896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.391 [2024-12-10 05:53:00.272949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.391 [2024-12-10 05:53:00.272969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.391 [2024-12-10 05:53:00.272981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.391 [2024-12-10 05:53:00.277897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.391 [2024-12-10 05:53:00.277918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.391 [2024-12-10 05:53:00.277925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.650 [2024-12-10 05:53:00.282933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.650 [2024-12-10 05:53:00.282954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.650 [2024-12-10 05:53:00.282962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.650 [2024-12-10 05:53:00.288015] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.650 [2024-12-10 05:53:00.288036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.650 [2024-12-10 05:53:00.288044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.650 [2024-12-10 05:53:00.293173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.650 [2024-12-10 05:53:00.293193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.650 [2024-12-10 05:53:00.293200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.650 [2024-12-10 05:53:00.298348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.650 [2024-12-10 05:53:00.298368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.650 [2024-12-10 05:53:00.298376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.650 [2024-12-10 05:53:00.303519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.650 [2024-12-10 05:53:00.303539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.650 [2024-12-10 05:53:00.303547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:12.650 [2024-12-10 05:53:00.308676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.650 [2024-12-10 05:53:00.308696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.650 [2024-12-10 05:53:00.308704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.650 [2024-12-10 05:53:00.313833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.650 [2024-12-10 05:53:00.313853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.313862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.319920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.319945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.319954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.326113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.326134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.326142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.333159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.333187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.333195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.340617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.340639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.340647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.346991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.347012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.347020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.353433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.353455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 
05:53:00.353463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.359233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.359255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.359263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.365917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.365939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.365946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.373546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.373567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.373576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.380442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.380464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.380473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.387038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.387059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.387068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.393517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.393538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.393547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.398796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.398817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.398825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.404029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.404049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.404057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.409400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.409421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.409429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.414631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.414652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.414660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.419854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.419874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.419882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.425064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.425085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.425096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.430376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.430397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.430405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.435642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.435663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.435671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.440954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.440975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.440983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.446116] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.446136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.446143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.448973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.448993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.449001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.454083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.454103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.454112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.459340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.459361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.459370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.464287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.464308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.464316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.469574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.651 [2024-12-10 05:53:00.469598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.651 [2024-12-10 05:53:00.469606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:12.651 [2024-12-10 05:53:00.474814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.652 [2024-12-10 05:53:00.474835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-10 05:53:00.474843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:12.652 [2024-12-10 05:53:00.479957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c386a0) 00:28:12.652 [2024-12-10 05:53:00.479978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.652 [2024-12-10 05:53:00.479986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:12.652 5536.00 IOPS, 692.00 MiB/s 00:28:12.652 Latency(us) 00:28:12.652 [2024-12-10T04:53:00.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.652 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:12.652 nvme0n1 : 2.00 5535.71 691.96 0.00 0.00 2887.72 655.36 8426.06 00:28:12.652 [2024-12-10T04:53:00.548Z] =================================================================================================================== 00:28:12.652 [2024-12-10T04:53:00.548Z] Total : 5535.71 691.96 0.00 0.00 2887.72 655.36 8426.06 00:28:12.652 { 00:28:12.652 "results": [ 00:28:12.652 { 00:28:12.652 "job": "nvme0n1", 00:28:12.652 "core_mask": "0x2", 00:28:12.652 "workload": "randread", 00:28:12.652 "status": "finished", 00:28:12.652 "queue_depth": 16, 00:28:12.652 "io_size": 131072, 00:28:12.652 "runtime": 2.002995, 00:28:12.652 "iops": 5535.710273864887, 00:28:12.652 "mibps": 691.9637842331109, 00:28:12.652 "io_failed": 0, 00:28:12.652 "io_timeout": 0, 00:28:12.652 "avg_latency_us": 2887.719755376898, 00:28:12.652 "min_latency_us": 655.36, 00:28:12.652 "max_latency_us": 8426.057142857142 00:28:12.652 } 00:28:12.652 ], 00:28:12.652 "core_count": 1 00:28:12.652 } 00:28:12.652 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:12.652 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:12.652 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:12.652 | .driver_specific 00:28:12.652 | .nvme_error 00:28:12.652 | .status_code 00:28:12.652 | .command_transient_transport_error' 00:28:12.652 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:12.911 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 358 > 0 )) 00:28:12.911 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1339449 00:28:12.911 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1339449 ']' 00:28:12.911 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1339449 00:28:12.911 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:12.911 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:12.911 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1339449 00:28:12.911 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:12.911 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:12.911 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1339449' 00:28:12.911 killing process with pid 1339449 00:28:12.911 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1339449 00:28:12.911 Received shutdown signal, test time was about 2.000000 seconds 00:28:12.911 00:28:12.911 Latency(us) 00:28:12.911 [2024-12-10T04:53:00.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.911 [2024-12-10T04:53:00.807Z] =================================================================================================================== 00:28:12.911 [2024-12-10T04:53:00.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:12.911 05:53:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1339449 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1339914 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1339914 /var/tmp/bperf.sock 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1339914 ']' 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.170 05:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.170 [2024-12-10 05:53:00.972019] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:28:13.170 [2024-12-10 05:53:00.972067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339914 ] 00:28:13.170 [2024-12-10 05:53:01.046508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.428 [2024-12-10 05:53:01.087657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.428 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.428 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:13.428 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:13.428 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:13.685 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:13.685 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.685 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.685 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.685 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.685 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.944 nvme0n1 00:28:13.944 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:13.944 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.944 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.944 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.944 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:13.944 05:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:13.944 Running I/O for 2 seconds... 
00:28:14.203 [2024-12-10 05:53:01.836584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee12d8 00:28:14.203 [2024-12-10 05:53:01.837574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.203 [2024-12-10 05:53:01.837601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:14.203 [2024-12-10 05:53:01.847465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee5220 00:28:14.203 [2024-12-10 05:53:01.849009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.203 [2024-12-10 05:53:01.849029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:14.203 [2024-12-10 05:53:01.854051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee2c28 00:28:14.203 [2024-12-10 05:53:01.854917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.203 [2024-12-10 05:53:01.854936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:14.203 [2024-12-10 05:53:01.865041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef57b0 00:28:14.203 [2024-12-10 05:53:01.866228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.203 [2024-12-10 05:53:01.866248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:14.203 [2024-12-10 05:53:01.874986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee1710 00:28:14.203 [2024-12-10 05:53:01.876424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.203 [2024-12-10 05:53:01.876443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:14.203 [2024-12-10 05:53:01.884176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef4f40 00:28:14.203 [2024-12-10 05:53:01.885620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.203 [2024-12-10 05:53:01.885638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:14.203 [2024-12-10 05:53:01.890488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016edf118 00:28:14.204 [2024-12-10 05:53:01.891182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:01.891200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:01.900529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef2510 00:28:14.204 [2024-12-10 05:53:01.901375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:01.901394] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:01.910663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eee190 00:28:14.204 [2024-12-10 05:53:01.911975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:01.911994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:01.919605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:01.920639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:01.920657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:01.928575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:01.929644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:01.929662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:01.937574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:01.938633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:01.938650] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:01.946555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:01.947631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:01.947650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:01.955536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:01.956641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:01.956660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:01.964510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:01.965582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:01.965600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:01.973447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:01.974524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18468 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:14.204 [2024-12-10 05:53:01.974541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:01.982493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:01.983578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:01.983597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:01.991486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:01.992599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:01.992618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:02.000456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:02.001509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:02.001527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:02.009357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:02.010414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:10210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:02.010432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:02.018290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:02.019356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:02.019374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:02.027246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:02.028376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:02.028393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:02.036269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:02.037383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:02.037404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:02.045203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:02.046307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:02.046324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:02.053609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef57b0 00:28:14.204 [2024-12-10 05:53:02.054708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:02.054725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:02.063055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee84c0 00:28:14.204 [2024-12-10 05:53:02.064255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:02.064274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:02.072192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8 00:28:14.204 [2024-12-10 05:53:02.072954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.204 [2024-12-10 05:53:02.072972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:14.204 [2024-12-10 05:53:02.082507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee4de8 
00:28:14.204 [2024-12-10 05:53:02.084048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.204 [2024-12-10 05:53:02.084066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:28:14.204 [2024-12-10 05:53:02.088944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef5be8
00:28:14.204 [2024-12-10 05:53:02.089640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.204 [2024-12-10 05:53:02.089659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:28:14.463 [2024-12-10 05:53:02.097636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee01f8
00:28:14.463 [2024-12-10 05:53:02.098335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.463 [2024-12-10 05:53:02.098353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:28:14.463 [2024-12-10 05:53:02.107777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee38d0
00:28:14.464 [2024-12-10 05:53:02.108609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.108627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.117144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee0ea0
00:28:14.464 [2024-12-10 05:53:02.118097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.118115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.126229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6890
00:28:14.464 [2024-12-10 05:53:02.127172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.127191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.135560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee4140
00:28:14.464 [2024-12-10 05:53:02.136619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.136638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.144688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eddc00
00:28:14.464 [2024-12-10 05:53:02.145778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.145797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.153713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eff3c8
00:28:14.464 [2024-12-10 05:53:02.154825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.154842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.162790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eee5c8
00:28:14.464 [2024-12-10 05:53:02.163923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.163942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.171948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef8a50
00:28:14.464 [2024-12-10 05:53:02.173031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.173049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.180941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efc560
00:28:14.464 [2024-12-10 05:53:02.182037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.182055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.189995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efd640
00:28:14.464 [2024-12-10 05:53:02.191088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.191107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.198982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eed920
00:28:14.464 [2024-12-10 05:53:02.200081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.200099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.207929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eec840
00:28:14.464 [2024-12-10 05:53:02.209031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.209049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.217103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eeb760
00:28:14.464 [2024-12-10 05:53:02.218202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.218220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.226028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef1868
00:28:14.464 [2024-12-10 05:53:02.227124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.227142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.234479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee1f80
00:28:14.464 [2024-12-10 05:53:02.235569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.235587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.242842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef2d80
00:28:14.464 [2024-12-10 05:53:02.243559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.243577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.251765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef0ff8
00:28:14.464 [2024-12-10 05:53:02.252487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.252505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.260735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee4de8
00:28:14.464 [2024-12-10 05:53:02.261477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.261495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.271995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efcdd0
00:28:14.464 [2024-12-10 05:53:02.273461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.273482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.281112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6890
00:28:14.464 [2024-12-10 05:53:02.282581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.282599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.289903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee6b70
00:28:14.464 [2024-12-10 05:53:02.291316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.291335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.296441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eea680
00:28:14.464 [2024-12-10 05:53:02.297133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.297151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.307654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef1ca0
00:28:14.464 [2024-12-10 05:53:02.308866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.308883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.316869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee73e0
00:28:14.464 [2024-12-10 05:53:02.317664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.317683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.327199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efb8b8
00:28:14.464 [2024-12-10 05:53:02.328810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.328828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.333852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef2510
00:28:14.464 [2024-12-10 05:53:02.334741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.334759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.342949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eef270
00:28:14.464 [2024-12-10 05:53:02.343839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.464 [2024-12-10 05:53:02.343857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:28:14.464 [2024-12-10 05:53:02.354042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efe720
00:28:14.724 [2024-12-10 05:53:02.355301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.355319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.362760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef8e88
00:28:14.724 [2024-12-10 05:53:02.364003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.364021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.372275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef7538
00:28:14.724 [2024-12-10 05:53:02.373653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.373671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.381716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eea680
00:28:14.724 [2024-12-10 05:53:02.383218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.383236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.388068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee1f80
00:28:14.724 [2024-12-10 05:53:02.388701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.388720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.397021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee1b48
00:28:14.724 [2024-12-10 05:53:02.397782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.397801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.406117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee7818
00:28:14.724 [2024-12-10 05:53:02.406896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.406914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.415508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef20d8
00:28:14.724 [2024-12-10 05:53:02.416252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.416271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.425787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee6300
00:28:14.724 [2024-12-10 05:53:02.427035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.427053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.435255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee3060
00:28:14.724 [2024-12-10 05:53:02.436606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.436623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.441790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee6300
00:28:14.724 [2024-12-10 05:53:02.442450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.442468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.451185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eea680
00:28:14.724 [2024-12-10 05:53:02.451920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.451939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.460392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef7100
00:28:14.724 [2024-12-10 05:53:02.461107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.461125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.470943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee3498
00:28:14.724 [2024-12-10 05:53:02.472045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.472063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.478287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee95a0
00:28:14.724 [2024-12-10 05:53:02.478848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.478866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.487492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee4140
00:28:14.724 [2024-12-10 05:53:02.488222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.488240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.496914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee01f8
00:28:14.724 [2024-12-10 05:53:02.497914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.497932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.505298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eef270
00:28:14.724 [2024-12-10 05:53:02.505859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.505880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.515194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efa7d8
00:28:14.724 [2024-12-10 05:53:02.516539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.516557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.523382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef0788
00:28:14.724 [2024-12-10 05:53:02.524149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.524171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.533702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee1f80
00:28:14.724 [2024-12-10 05:53:02.534925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.534943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.542022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef31b8
00:28:14.724 [2024-12-10 05:53:02.542820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.542838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.550851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efac10
00:28:14.724 [2024-12-10 05:53:02.551603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.551621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.560158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef7100
00:28:14.724 [2024-12-10 05:53:02.561241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.724 [2024-12-10 05:53:02.561259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:28:14.724 [2024-12-10 05:53:02.569552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee38d0
00:28:14.724 [2024-12-10 05:53:02.570813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.725 [2024-12-10 05:53:02.570831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:28:14.725 [2024-12-10 05:53:02.578630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef0350
00:28:14.725 [2024-12-10 05:53:02.579395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.725 [2024-12-10 05:53:02.579414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:28:14.725 [2024-12-10 05:53:02.587049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef3a28
00:28:14.725 [2024-12-10 05:53:02.587724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.725 [2024-12-10 05:53:02.587742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:28:14.725 [2024-12-10 05:53:02.595609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef5378
00:28:14.725 [2024-12-10 05:53:02.596285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.725 [2024-12-10 05:53:02.596303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:28:14.725 [2024-12-10 05:53:02.604335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef5be8
00:28:14.725 [2024-12-10 05:53:02.604802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.725 [2024-12-10 05:53:02.604820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.614797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee84c0
00:28:14.984 [2024-12-10 05:53:02.615938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.984 [2024-12-10 05:53:02.615955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.624215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6890
00:28:14.984 [2024-12-10 05:53:02.625262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.984 [2024-12-10 05:53:02.625280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.632686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef20d8
00:28:14.984 [2024-12-10 05:53:02.633827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.984 [2024-12-10 05:53:02.633846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.641059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eed0b0
00:28:14.984 [2024-12-10 05:53:02.641742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.984 [2024-12-10 05:53:02.641761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.650235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6020
00:28:14.984 [2024-12-10 05:53:02.650793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.984 [2024-12-10 05:53:02.650811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.658685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6020
00:28:14.984 [2024-12-10 05:53:02.659140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.984 [2024-12-10 05:53:02.659158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.669405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee0ea0
00:28:14.984 [2024-12-10 05:53:02.670768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.984 [2024-12-10 05:53:02.670786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.677372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ede8a8
00:28:14.984 [2024-12-10 05:53:02.678030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.984 [2024-12-10 05:53:02.678049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.685998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef0ff8
00:28:14.984 [2024-12-10 05:53:02.687230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.984 [2024-12-10 05:53:02.687249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.694507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee1f80
00:28:14.984 [2024-12-10 05:53:02.695145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.984 [2024-12-10 05:53:02.695162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.705702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef20d8
00:28:14.984 [2024-12-10 05:53:02.706946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.984 [2024-12-10 05:53:02.706966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:28:14.984 [2024-12-10 05:53:02.715083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee0a68
00:28:14.984 [2024-12-10 05:53:02.716550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.985 [2024-12-10 05:53:02.716569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:28:14.985 [2024-12-10 05:53:02.721569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eed920
00:28:14.985 [2024-12-10 05:53:02.722214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.985 [2024-12-10 05:53:02.722233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:28:14.985 [2024-12-10 05:53:02.731934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efa7d8
00:28:14.985 [2024-12-10 05:53:02.733321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.985 [2024-12-10 05:53:02.733340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:28:14.985 [2024-12-10 05:53:02.739679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef0ff8
00:28:14.985 [2024-12-10 05:53:02.740418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.985 [2024-12-10 05:53:02.740440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:28:14.985 [2024-12-10 05:53:02.749187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef4298
00:28:14.985 [2024-12-10 05:53:02.749992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.985 [2024-12-10 05:53:02.750010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:28:14.985 [2024-12-10 05:53:02.758342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef57b0
00:28:14.985 [2024-12-10 05:53:02.759138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.985 [2024-12-10 05:53:02.759156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:28:14.985 [2024-12-10 05:53:02.767654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eefae0
00:28:14.985 [2024-12-10 05:53:02.768417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.985 [2024-12-10 05:53:02.768436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:28:14.985 [2024-12-10 05:53:02.776958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee2c28
00:28:14.985 [2024-12-10 05:53:02.777943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:14.985 [2024-12-10 05:53:02.777962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:28:14.985 [2024-12-10 05:53:02.785466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef0ff8
00:28:14.985 [2024-12-10 05:53:02.786429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.985 [2024-12-10 05:53:02.786447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:14.985 [2024-12-10 05:53:02.794543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef81e0 00:28:14.985 [2024-12-10 05:53:02.795471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.985 [2024-12-10 05:53:02.795490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:14.985 [2024-12-10 05:53:02.803793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef35f0 00:28:14.985 [2024-12-10 05:53:02.804797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.985 [2024-12-10 05:53:02.804815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:14.985 [2024-12-10 05:53:02.812729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef0bc0 00:28:14.985 [2024-12-10 05:53:02.813643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.985 [2024-12-10 05:53:02.813662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:14.985 [2024-12-10 05:53:02.822136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2455410) with pdu=0x200016ef7100 00:28:14.985 [2024-12-10 05:53:02.823423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.985 [2024-12-10 05:53:02.823441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:14.985 28172.00 IOPS, 110.05 MiB/s [2024-12-10T04:53:02.881Z] [2024-12-10 05:53:02.831326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016edece0 00:28:14.985 [2024-12-10 05:53:02.832557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.985 [2024-12-10 05:53:02.832576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:14.985 [2024-12-10 05:53:02.840155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef5be8 00:28:14.985 [2024-12-10 05:53:02.841403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.985 [2024-12-10 05:53:02.841422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:14.985 [2024-12-10 05:53:02.849417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ede8a8 00:28:14.985 [2024-12-10 05:53:02.850695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.985 [2024-12-10 05:53:02.850714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:28:14.985 [2024-12-10 05:53:02.858176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efda78 00:28:14.985 [2024-12-10 05:53:02.859232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.985 [2024-12-10 05:53:02.859251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:14.985 [2024-12-10 05:53:02.866869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016edf988 00:28:14.985 [2024-12-10 05:53:02.867666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.985 [2024-12-10 05:53:02.867686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:15.244 [2024-12-10 05:53:02.876373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee4140 00:28:15.244 [2024-12-10 05:53:02.877014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.244 [2024-12-10 05:53:02.877033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:15.244 [2024-12-10 05:53:02.885213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee95a0 00:28:15.244 [2024-12-10 05:53:02.886098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.244 [2024-12-10 05:53:02.886117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:15.244 [2024-12-10 05:53:02.895697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016edf550 00:28:15.244 [2024-12-10 05:53:02.897158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.244 [2024-12-10 05:53:02.897181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:02.902238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef2948 00:28:15.245 [2024-12-10 05:53:02.902972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:02.902991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:02.913397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee95a0 00:28:15.245 [2024-12-10 05:53:02.914647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:02.914666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:02.921601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efda78 00:28:15.245 [2024-12-10 05:53:02.922799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:02.922818] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:02.931223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee01f8 00:28:15.245 [2024-12-10 05:53:02.932357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:02.932377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:02.940814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee3060 00:28:15.245 [2024-12-10 05:53:02.941845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:02.941865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:02.948316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eeea00 00:28:15.245 [2024-12-10 05:53:02.948755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:02.948774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:02.957447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eebb98 00:28:15.245 [2024-12-10 05:53:02.958104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:02.958122] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:02.966562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eebb98 00:28:15.245 [2024-12-10 05:53:02.967278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:02.967297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:02.976730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eebb98 00:28:15.245 [2024-12-10 05:53:02.978022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:02.978045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:02.985314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6890 00:28:15.245 [2024-12-10 05:53:02.986346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:02.986365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:02.993735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef5be8 00:28:15.245 [2024-12-10 05:53:02.994515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20000 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:15.245 [2024-12-10 05:53:02.994533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.002369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eeea00 00:28:15.245 [2024-12-10 05:53:03.003160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.003183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.013540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eed4e8 00:28:15.245 [2024-12-10 05:53:03.014818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.014837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.021898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee5ec8 00:28:15.245 [2024-12-10 05:53:03.022995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.023013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.030965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eeb760 00:28:15.245 [2024-12-10 05:53:03.031936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:5802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.031954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.039917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee4de8 00:28:15.245 [2024-12-10 05:53:03.040915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.040933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.048241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef0ff8 00:28:15.245 [2024-12-10 05:53:03.048833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.048852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.057408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eed920 00:28:15.245 [2024-12-10 05:53:03.057857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.057876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.065949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eea248 00:28:15.245 [2024-12-10 05:53:03.066743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.066762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.076975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef7970 00:28:15.245 [2024-12-10 05:53:03.078312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.078331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.085247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee8088 00:28:15.245 [2024-12-10 05:53:03.086557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.086575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.092966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6458 00:28:15.245 [2024-12-10 05:53:03.093669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.093687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.104035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef96f8 
00:28:15.245 [2024-12-10 05:53:03.105250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.105269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.113631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efd208 00:28:15.245 [2024-12-10 05:53:03.114964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.114983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.123148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee84c0 00:28:15.245 [2024-12-10 05:53:03.124608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.124626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:15.245 [2024-12-10 05:53:03.132813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef9b30 00:28:15.245 [2024-12-10 05:53:03.134388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.245 [2024-12-10 05:53:03.134406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.139310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2455410) with pdu=0x200016efc128 00:28:15.505 [2024-12-10 05:53:03.140046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.140064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.148767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eff3c8 00:28:15.505 [2024-12-10 05:53:03.149624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.149642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.159025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efac10 00:28:15.505 [2024-12-10 05:53:03.160332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.160350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.167358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eef6a8 00:28:15.505 [2024-12-10 05:53:03.168233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.168251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.176538] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee6300 00:28:15.505 [2024-12-10 05:53:03.177292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.177310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.185015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee3d08 00:28:15.505 [2024-12-10 05:53:03.186339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.186357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.193375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef7100 00:28:15.505 [2024-12-10 05:53:03.194005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.194023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.204472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef3e60 00:28:15.505 [2024-12-10 05:53:03.205935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.205953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 
dnr:0 00:28:15.505 [2024-12-10 05:53:03.210993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efef90 00:28:15.505 [2024-12-10 05:53:03.211685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.211706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.220983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef7970 00:28:15.505 [2024-12-10 05:53:03.221859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.221877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.229425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee8088 00:28:15.505 [2024-12-10 05:53:03.230262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.230280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.238908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eeb760 00:28:15.505 [2024-12-10 05:53:03.239833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.239851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.248299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee2c28 00:28:15.505 [2024-12-10 05:53:03.249404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.249422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.257556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef92c0 00:28:15.505 [2024-12-10 05:53:03.258165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.258188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.266854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee5a90 00:28:15.505 [2024-12-10 05:53:03.267591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.267608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.275032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee8088 00:28:15.505 [2024-12-10 05:53:03.275908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.275927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.284095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6020 00:28:15.505 [2024-12-10 05:53:03.284987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.285005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.293501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef7538 00:28:15.505 [2024-12-10 05:53:03.294565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.294583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.302845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee49b0 00:28:15.505 [2024-12-10 05:53:03.304043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.304060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.311942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee01f8 00:28:15.505 [2024-12-10 05:53:03.312714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 
[2024-12-10 05:53:03.312733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.320409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eeea00 00:28:15.505 [2024-12-10 05:53:03.321808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.321826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.328138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016edf550 00:28:15.505 [2024-12-10 05:53:03.328889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.328907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:15.505 [2024-12-10 05:53:03.337601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef92c0 00:28:15.505 [2024-12-10 05:53:03.338409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.505 [2024-12-10 05:53:03.338427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:15.506 [2024-12-10 05:53:03.346928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef1430 00:28:15.506 [2024-12-10 05:53:03.347880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12833 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.506 [2024-12-10 05:53:03.347897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:15.506 [2024-12-10 05:53:03.356335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efdeb0 00:28:15.506 [2024-12-10 05:53:03.357448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.506 [2024-12-10 05:53:03.357467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:15.506 [2024-12-10 05:53:03.365935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efe720 00:28:15.506 [2024-12-10 05:53:03.367187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.506 [2024-12-10 05:53:03.367206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:15.506 [2024-12-10 05:53:03.375185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eefae0 00:28:15.506 [2024-12-10 05:53:03.375964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.506 [2024-12-10 05:53:03.375983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:15.506 [2024-12-10 05:53:03.383745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efd208 00:28:15.506 [2024-12-10 05:53:03.385125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.506 [2024-12-10 05:53:03.385143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:15.506 [2024-12-10 05:53:03.391569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee7c50 00:28:15.506 [2024-12-10 05:53:03.392314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.506 [2024-12-10 05:53:03.392332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:15.765 [2024-12-10 05:53:03.401212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eeaab8 00:28:15.765 [2024-12-10 05:53:03.402080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.765 [2024-12-10 05:53:03.402098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:15.765 [2024-12-10 05:53:03.412185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef7538 00:28:15.765 [2024-12-10 05:53:03.413428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.765 [2024-12-10 05:53:03.413447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:15.765 [2024-12-10 05:53:03.420545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee1710 00:28:15.765 [2024-12-10 05:53:03.421662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.765 [2024-12-10 05:53:03.421680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:15.765 [2024-12-10 05:53:03.429201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef3e60 00:28:15.765 [2024-12-10 05:53:03.430306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.765 [2024-12-10 05:53:03.430324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:15.765 [2024-12-10 05:53:03.437625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee5658 00:28:15.765 [2024-12-10 05:53:03.438374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.438393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.446758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016edece0 00:28:15.766 [2024-12-10 05:53:03.447280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.447301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.456193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef7100 00:28:15.766 
[2024-12-10 05:53:03.456830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.456848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.466864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee2c28 00:28:15.766 [2024-12-10 05:53:03.468396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.468414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.473204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016edf118 00:28:15.766 [2024-12-10 05:53:03.473933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.473950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.481710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee1710 00:28:15.766 [2024-12-10 05:53:03.482464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.482482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.492650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2455410) with pdu=0x200016ef4298 00:28:15.766 [2024-12-10 05:53:03.493743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.493762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.501178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eed0b0 00:28:15.766 [2024-12-10 05:53:03.502280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.502298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.509642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eefae0 00:28:15.766 [2024-12-10 05:53:03.510402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.510431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.519661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef7538 00:28:15.766 [2024-12-10 05:53:03.520844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.520862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.528010] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eee5c8 00:28:15.766 [2024-12-10 05:53:03.528839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.528861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.536264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee88f8 00:28:15.766 [2024-12-10 05:53:03.537078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.537095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.546237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee6b70 00:28:15.766 [2024-12-10 05:53:03.547223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.547241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.555515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eee5c8 00:28:15.766 [2024-12-10 05:53:03.556516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.556534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:28:15.766 [2024-12-10 05:53:03.564058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eed920 00:28:15.766 [2024-12-10 05:53:03.564926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.564945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.573459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eed0b0 00:28:15.766 [2024-12-10 05:53:03.574577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.574596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.581964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6890 00:28:15.766 [2024-12-10 05:53:03.582941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.582960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.590901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eeaab8 00:28:15.766 [2024-12-10 05:53:03.591980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.591998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.599301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ede038 00:28:15.766 [2024-12-10 05:53:03.600032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.600051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.609276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eef270 00:28:15.766 [2024-12-10 05:53:03.610472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.610491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.617829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee5ec8 00:28:15.766 [2024-12-10 05:53:03.618685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.618703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.626783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eeea00 00:28:15.766 [2024-12-10 05:53:03.627639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.627657] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.635870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef20d8 00:28:15.766 [2024-12-10 05:53:03.636737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.636755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.645079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef5378 00:28:15.766 [2024-12-10 05:53:03.645932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.645950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:15.766 [2024-12-10 05:53:03.654536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6cc8 00:28:15.766 [2024-12-10 05:53:03.655547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.766 [2024-12-10 05:53:03.655566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:16.025 [2024-12-10 05:53:03.663612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee0a68 00:28:16.026 [2024-12-10 05:53:03.664807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.664825] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.672044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eed920 00:28:16.026 [2024-12-10 05:53:03.672882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.672900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.681103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efe2e8 00:28:16.026 [2024-12-10 05:53:03.681728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.681746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.691412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efbcf0 00:28:16.026 [2024-12-10 05:53:03.692835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.692853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.700781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eff3c8 00:28:16.026 [2024-12-10 05:53:03.702290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15890 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:16.026 [2024-12-10 05:53:03.702308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.707135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef1430 00:28:16.026 [2024-12-10 05:53:03.707853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.707872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.715643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016eea248 00:28:16.026 [2024-12-10 05:53:03.716347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.716365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.725791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef3e60 00:28:16.026 [2024-12-10 05:53:03.726649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.726667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.735100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6890 00:28:16.026 [2024-12-10 05:53:03.736053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:23161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.736072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.743602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efe2e8 00:28:16.026 [2024-12-10 05:53:03.744588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.744607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.753558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee5a90 00:28:16.026 [2024-12-10 05:53:03.754613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.754631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.763101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef46d0 00:28:16.026 [2024-12-10 05:53:03.764432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.764454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.771484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef4b08 00:28:16.026 [2024-12-10 05:53:03.772487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.772506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.781453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef35f0 00:28:16.026 [2024-12-10 05:53:03.782849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.782866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.789790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efd640 00:28:16.026 [2024-12-10 05:53:03.790860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.790879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.797998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee23b8 00:28:16.026 [2024-12-10 05:53:03.799292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.026 [2024-12-10 05:53:03.799310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:16.026 [2024-12-10 05:53:03.806303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ef6458 
00:28:16.026 [2024-12-10 05:53:03.807015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.026 [2024-12-10 05:53:03.807033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:16.026 [2024-12-10 05:53:03.815509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016efc560
00:28:16.026 [2024-12-10 05:53:03.816033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.026 [2024-12-10 05:53:03.816051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:28:16.026 [2024-12-10 05:53:03.824650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455410) with pdu=0x200016ee8d30
00:28:16.026 [2024-12-10 05:53:03.825470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.026 [2024-12-10 05:53:03.825488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:28:16.026 28212.50 IOPS, 110.21 MiB/s
00:28:16.026 Latency(us)
00:28:16.026 [2024-12-10T04:53:03.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:16.026 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:16.026 nvme0n1 : 2.00 28227.27 110.26 0.00 0.00 4529.65 1825.65 12233.39
00:28:16.026 [2024-12-10T04:53:03.922Z] ===================================================================================================================
00:28:16.026 [2024-12-10T04:53:03.922Z] Total : 28227.27 110.26 0.00 0.00 4529.65 1825.65 12233.39
00:28:16.026 {
00:28:16.026 "results": [
00:28:16.026 {
00:28:16.026 "job": "nvme0n1",
00:28:16.026 "core_mask": "0x2",
00:28:16.026 "workload": "randwrite",
00:28:16.026 "status": "finished",
00:28:16.026 "queue_depth": 128,
00:28:16.026 "io_size": 4096,
00:28:16.026 "runtime": 2.003488,
00:28:16.026 "iops": 28227.271638262868,
00:28:16.026 "mibps": 110.26277983696433,
00:28:16.026 "io_failed": 0,
00:28:16.026 "io_timeout": 0,
00:28:16.026 "avg_latency_us": 4529.654734867335,
00:28:16.026 "min_latency_us": 1825.6457142857143,
00:28:16.026 "max_latency_us": 12233.386666666667
00:28:16.026 }
00:28:16.026 ],
00:28:16.026 "core_count": 1
00:28:16.026 }
00:28:16.026 05:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:16.026 05:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:16.026 05:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:16.026 | .driver_specific
00:28:16.026 | .nvme_error
00:28:16.026 | .status_code
00:28:16.026 | .command_transient_transport_error'
00:28:16.026 05:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:16.285 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 ))
00:28:16.285 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1339914
00:28:16.286 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1339914 ']'
00:28:16.286 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1339914
00:28:16.286 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:16.286
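The xtrace above shows how `get_transient_errcount` extracts the transient-transport-error counter from `bdev_get_iostat` output with a `jq` filter before comparing it against 0. A minimal Python sketch of the same extraction follows; the payload below is illustrative only (real `bdev_get_iostat` output carries many more counters), with the count 221 taken from the `(( 221 > 0 ))` check in the trace:

```python
import json

# Illustrative bdev_get_iostat-style payload: only the fields the jq filter
# touches are included. Real output has additional per-bdev statistics.
iostat_output = json.dumps({
    "bdevs": [
        {
            "name": "nvme0n1",
            "driver_specific": {
                "nvme_error": {
                    "status_code": {
                        # counter the test compares against 0
                        "command_transient_transport_error": 221
                    }
                }
            }
        }
    ]
})

# Equivalent of: jq -r '.bdevs[0] | .driver_specific | .nvme_error
#                       | .status_code | .command_transient_transport_error'
count = (json.loads(iostat_output)["bdevs"][0]
         ["driver_specific"]["nvme_error"]["status_code"]
         ["command_transient_transport_error"])
print(count)
```

The nested-index chain mirrors the jq pipeline stage for stage, so a nonzero result here is exactly the condition the test script checks with `(( count > 0 ))`.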
05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:16.286 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1339914
00:28:16.286 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:16.286 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:16.286 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1339914'
00:28:16.286 killing process with pid 1339914
00:28:16.286 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1339914
00:28:16.286 Received shutdown signal, test time was about 2.000000 seconds
00:28:16.286
00:28:16.286 Latency(us)
00:28:16.286 [2024-12-10T04:53:04.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:16.286 [2024-12-10T04:53:04.182Z] ===================================================================================================================
00:28:16.286 [2024-12-10T04:53:04.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:16.286 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1339914
00:28:16.544 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:16.544 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:16.544 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:16.544 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:16.544 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:16.544 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1340474
00:28:16.544 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1340474 /var/tmp/bperf.sock
00:28:16.544 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:16.545 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1340474 ']'
00:28:16.545 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:16.545 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:16.545 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:16.545 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:16.545 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:16.545 [2024-12-10 05:53:04.320066] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization...
00:28:16.545 [2024-12-10 05:53:04.320116] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340474 ]
00:28:16.545 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:16.545 Zero copy mechanism will not be used.
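Earlier in this transcript, `get_transient_errcount` (host/digest.sh@28) pulls the transient transport error count out of `bdev_get_iostat` output with a jq filter. As a minimal sketch, the same extraction can be done in Python over the RPC's JSON; the sample payload below is hypothetical (only the fields the jq filter touches are included, with the count 221 taken from the `(( 221 > 0 ))` check in the log), not a capture from this run:

```python
import json

def get_transient_errcount(iostat_json: str) -> int:
    # Mirrors the jq filter from the transcript:
    #   .bdevs[0] | .driver_specific | .nvme_error | .status_code
    #             | .command_transient_transport_error
    stat = json.loads(iostat_json)
    return (stat["bdevs"][0]["driver_specific"]["nvme_error"]
                ["status_code"]["command_transient_transport_error"])

# Hypothetical bdev_get_iostat payload, trimmed to the fields used above.
sample = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 221}
            }
        }
    }]
})
print(get_transient_errcount(sample))  # 221
```

A non-zero count here is the test's pass condition: with `--ddgst` enabled and CRC32C error injection active, every corrupted write should surface as a transient transport error rather than a data-integrity escape.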
00:28:16.545 [2024-12-10 05:53:04.393175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.545 [2024-12-10 05:53:04.428794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.803 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.803 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:16.803 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.803 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.061 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:17.061 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.061 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.061 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.061 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.062 05:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.320 nvme0n1 00:28:17.320 05:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:17.320 05:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.320 05:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.320 05:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.320 05:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:17.320 05:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.320 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:17.320 Zero copy mechanism will not be used. 00:28:17.320 Running I/O for 2 seconds... 00:28:17.320 [2024-12-10 05:53:05.183834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.320 [2024-12-10 05:53:05.183937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.320 [2024-12-10 05:53:05.183965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.320 [2024-12-10 05:53:05.189821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.320 [2024-12-10 05:53:05.189897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.320 [2024-12-10 05:53:05.189920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.320 [2024-12-10 
05:53:05.194378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.320 [2024-12-10 05:53:05.194451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.320 [2024-12-10 05:53:05.194471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.320 [2024-12-10 05:53:05.198914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.320 [2024-12-10 05:53:05.198974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.320 [2024-12-10 05:53:05.198993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.320 [2024-12-10 05:53:05.203602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.320 [2024-12-10 05:53:05.203676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.320 [2024-12-10 05:53:05.203696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.320 [2024-12-10 05:53:05.208397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.320 [2024-12-10 05:53:05.208452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.320 [2024-12-10 05:53:05.208471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:28:17.579 [2024-12-10 05:53:05.213836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.579 [2024-12-10 05:53:05.213890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.579 [2024-12-10 05:53:05.213909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.579 [2024-12-10 05:53:05.219151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.579 [2024-12-10 05:53:05.219244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.579 [2024-12-10 05:53:05.219263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.579 [2024-12-10 05:53:05.223798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.579 [2024-12-10 05:53:05.223867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.579 [2024-12-10 05:53:05.223891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.579 [2024-12-10 05:53:05.228497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.579 [2024-12-10 05:53:05.228556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.579 [2024-12-10 05:53:05.228575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.579 [2024-12-10 05:53:05.233138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.579 [2024-12-10 05:53:05.233221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.579 [2024-12-10 05:53:05.233240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.579 [2024-12-10 05:53:05.237793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.579 [2024-12-10 05:53:05.237892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.579 [2024-12-10 05:53:05.237910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.579 [2024-12-10 05:53:05.242107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.579 [2024-12-10 05:53:05.242236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.579 [2024-12-10 05:53:05.242253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.246408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.246471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.246489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.250661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.250715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.250733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.254947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.255026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.255044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.259163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.259253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.259271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.263454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.263510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:17.580 [2024-12-10 05:53:05.263528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.267642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.267693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.267711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.271845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.271901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.271918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.276023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.276111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.276129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.280256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.280328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.280346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.284437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.284504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.284521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.288608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.288677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.288695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.292778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.292853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.292871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.296935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.296991] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.297009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.301125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.301184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.301203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.305326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.305395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.305413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.309504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.309578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.309596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.313698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.313757] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.313775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.317876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.317933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.317951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.322086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.322171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.322190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.326327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.326399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.326417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.330554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with 
pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.330612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.330630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.334731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.334791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.334813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.338933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.338991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.339009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.343107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.343176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.343194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.347315] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.347387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.347405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.351494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.351561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.351578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.355645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.355711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.355729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.359838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.359915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.359932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 
05:53:05.363948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.364020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.364038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.368172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.368236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.368254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.372352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.372435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.372453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.376516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.376568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.376586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.381155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.381262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.381280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.385569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.385645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.385663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.389838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.389920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.389938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.394061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.394116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.394134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.398298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.398372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.398391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.402517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.402591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.402608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.406718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.406786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.406804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.411478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.411561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.411579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.416135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.416246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.416264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.421346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.421424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.421442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.426685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.426736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.580 [2024-12-10 05:53:05.426754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.580 [2024-12-10 05:53:05.431801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.580 [2024-12-10 05:53:05.431859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:17.581 [2024-12-10 05:53:05.431877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.581 [2024-12-10 05:53:05.436944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.581 [2024-12-10 05:53:05.437084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.581 [2024-12-10 05:53:05.437102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.581 [2024-12-10 05:53:05.441789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.581 [2024-12-10 05:53:05.442000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.581 [2024-12-10 05:53:05.442020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.581 [2024-12-10 05:53:05.446894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.581 [2024-12-10 05:53:05.447013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.581 [2024-12-10 05:53:05.447031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.581 [2024-12-10 05:53:05.451273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.581 [2024-12-10 05:53:05.451509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.581 [2024-12-10 05:53:05.451532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.581 [2024-12-10 05:53:05.455665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.581 [2024-12-10 05:53:05.455929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.581 [2024-12-10 05:53:05.455947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.581 [2024-12-10 05:53:05.459959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.581 [2024-12-10 05:53:05.460228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.581 [2024-12-10 05:53:05.460247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.581 [2024-12-10 05:53:05.464986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.581 [2024-12-10 05:53:05.465242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.581 [2024-12-10 05:53:05.465261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.470081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.840 [2024-12-10 05:53:05.470341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.470361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.476438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.840 [2024-12-10 05:53:05.476778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.476798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.482514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.840 [2024-12-10 05:53:05.482797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.482816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.487605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.840 [2024-12-10 05:53:05.487848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.487867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.492530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 
00:28:17.840 [2024-12-10 05:53:05.492760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.492779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.497039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.840 [2024-12-10 05:53:05.497296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.497315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.501422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.840 [2024-12-10 05:53:05.501644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.501663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.505807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.840 [2024-12-10 05:53:05.506057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.506076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.509894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.840 [2024-12-10 05:53:05.510145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.510171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.513972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.840 [2024-12-10 05:53:05.514253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.514273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.518041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.840 [2024-12-10 05:53:05.518310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.518329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.522146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.840 [2024-12-10 05:53:05.522402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.840 [2024-12-10 05:53:05.522421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.840 [2024-12-10 05:53:05.526226] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.526488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.526507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.530439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.530687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.530706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.534993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.535250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.535269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.539050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.539323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.539343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:28:17.841 [2024-12-10 05:53:05.543305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.543571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.543590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.547377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.547628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.547647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.551407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.551650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.551669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.555403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.555653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.555672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.559398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.559621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.559639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.563696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.563939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.563958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.568379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.568627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.568650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.573483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.573724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.573743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.577813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.578062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.578082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.582191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.582428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.582447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.586756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.586986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.587005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.590975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.591245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:17.841 [2024-12-10 05:53:05.591264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.595056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.595333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.595352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.599414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.599682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.599701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.603705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.603966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.603985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.608443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.608688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.608707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.613416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.613671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.613690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.618612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.618845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.618864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.623313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.623549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.623568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.628137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.628381] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.628401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.633217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.633459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.633479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.638277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.638546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.638565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.643145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.643389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.643408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.647910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.841 [2024-12-10 05:53:05.648147] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.841 [2024-12-10 05:53:05.648172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.841 [2024-12-10 05:53:05.652439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.652695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.652715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.656961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.657220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.657240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.661813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.662076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.662095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.666893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with 
pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.667132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.667151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.671430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.671670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.671689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.675773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.676011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.676029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.679898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.680148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.680173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.684262] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.684493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.684512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.688619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.688887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.688910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.693001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.693253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.693272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.697420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.697670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.697689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 
05:53:05.701801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.702056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.702074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.706526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.706796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.706815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.710943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.711198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.711219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.715089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.715355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.715373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.719200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.719437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.719457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.723261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.723509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.723528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.842 [2024-12-10 05:53:05.727361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:17.842 [2024-12-10 05:53:05.727606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.842 [2024-12-10 05:53:05.727626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.102 [2024-12-10 05:53:05.731462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.102 [2024-12-10 05:53:05.731712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.102 [2024-12-10 05:53:05.731731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.102 [2024-12-10 05:53:05.735592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.102 [2024-12-10 05:53:05.735839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.102 [2024-12-10 05:53:05.735858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.102 [2024-12-10 05:53:05.739606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.102 [2024-12-10 05:53:05.739847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.102 [2024-12-10 05:53:05.739866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.102 [2024-12-10 05:53:05.743629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.102 [2024-12-10 05:53:05.743881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.102 [2024-12-10 05:53:05.743899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.102 [2024-12-10 05:53:05.747987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.102 [2024-12-10 05:53:05.748249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.102 [2024-12-10 05:53:05.748268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.102 [2024-12-10 05:53:05.752459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.102 [2024-12-10 05:53:05.752732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.102 [2024-12-10 05:53:05.752751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.102 [2024-12-10 05:53:05.756667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.102 [2024-12-10 05:53:05.756897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.102 [2024-12-10 05:53:05.756915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.102 [2024-12-10 05:53:05.760936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.102 [2024-12-10 05:53:05.761198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.102 [2024-12-10 05:53:05.761217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.102 [2024-12-10 05:53:05.765312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.765561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.103 [2024-12-10 05:53:05.765580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.769713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.769958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.769977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.774072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.774348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.774368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.778637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.778899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.778918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.783082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.783357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.783376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.787464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.787705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.787724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.791936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.792192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.792210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.796291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.796560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.796579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.800696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.800947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.800969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.805112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.805352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.805371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.809518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.809769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.809788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.813886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.814135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.814154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.818339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 
00:28:18.103 [2024-12-10 05:53:05.818577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.818596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.822705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.822941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.822960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.827095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.827362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.827382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.831612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.831880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.831900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.836097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.836356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.836375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.840383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.840623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.840642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.844565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.844813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.844832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.850077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.850411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.850431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.856201] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.856512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.856531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.863349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.863713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.863733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.870582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.870913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.870933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.878200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.878457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.878476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:18.103 [2024-12-10 05:53:05.884680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.885015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.885034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.892308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.892578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.892597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.899514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.899818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.899837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.103 [2024-12-10 05:53:05.906006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.103 [2024-12-10 05:53:05.906290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.103 [2024-12-10 05:53:05.906309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.912901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.913174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.913194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.918976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.919316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.919336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.925086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.925422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.925441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.931881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.932118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.932137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.938407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.938712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.938730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.944302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.944506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.944525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.949281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.949487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.949510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.953309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.953521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.104 [2024-12-10 05:53:05.953540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.957353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.957582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.957600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.961351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.961563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.961582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.965282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.965510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.965529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.969226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.969435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.969454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.973134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.973367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.973385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.977257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.977492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.977510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.982336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.982568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.982586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.986938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.987137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.987154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.104 [2024-12-10 05:53:05.991305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.104 [2024-12-10 05:53:05.991523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.104 [2024-12-10 05:53:05.991542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.364 [2024-12-10 05:53:05.995510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.364 [2024-12-10 05:53:05.995712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.364 [2024-12-10 05:53:05.995729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.364 [2024-12-10 05:53:05.999693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.364 [2024-12-10 05:53:05.999903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.364 [2024-12-10 05:53:05.999922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.364 [2024-12-10 05:53:06.003871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 
00:28:18.364 [2024-12-10 05:53:06.004083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.364 [2024-12-10 05:53:06.004107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.364 [2024-12-10 05:53:06.007835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.364 [2024-12-10 05:53:06.008050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.364 [2024-12-10 05:53:06.008068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.364 [2024-12-10 05:53:06.012012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.012233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.012251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.016150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.016361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.016380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.020781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.021007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.021026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.025630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.025839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.025858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.029986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.030223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.030242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.034264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.034471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.034489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.039132] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.039363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.039383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.043725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.043924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.043942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.047884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.048088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.048106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.051895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.052111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.052130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:18.365 [2024-12-10 05:53:06.056025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.056241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.056261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.060261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.060476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.060498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.064380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.064596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.064615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.068634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.068846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.068864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.072874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.073075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.073098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.077092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.077332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.077350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.081352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.081592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.081611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.085504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.085712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.085730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.089610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.089816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.089834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.093580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.093784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.093802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.097652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.097850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.097869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.101573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.101786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.365 [2024-12-10 05:53:06.101805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.105588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.105769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.105786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.109581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.109782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.109805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.113957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.114151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.114174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.118490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.118657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.118675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.122510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.122676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.365 [2024-12-10 05:53:06.122694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.365 [2024-12-10 05:53:06.126511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.365 [2024-12-10 05:53:06.126679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.126697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.130506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.130672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.130690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.134733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.134876] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.134896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.138675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.138842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.138859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.142655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.142826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.142844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.146500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.146687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.146705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.150340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.150493] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.150510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.154423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.154573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.154591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.159314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.159474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.159492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.163596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.163741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.163759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.167641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with 
pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.167813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.167834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.171622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.171769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.171787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.175468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.175636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.175654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.179373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.179565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.179583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.366 6794.00 IOPS, 849.25 MiB/s [2024-12-10T04:53:06.262Z] [2024-12-10 
05:53:06.184116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.184309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.184328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.188059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.188256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.188276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.192829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.192995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.193015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.198021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.198254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.198273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.203066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.203233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.203253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.210088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.210210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.210229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.215164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.215325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.215343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.219253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.219444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.219462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.223262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.223425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.223443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.227109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.227300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.227318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.231092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.231262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.231280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.235199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.235344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.235361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.239668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.240036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.240055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.244741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.366 [2024-12-10 05:53:06.244904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.366 [2024-12-10 05:53:06.244922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.366 [2024-12-10 05:53:06.248810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.367 [2024-12-10 05:53:06.248994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.367 [2024-12-10 05:53:06.249012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.367 [2024-12-10 05:53:06.252827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.367 [2024-12-10 05:53:06.252985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.367 [2024-12-10 05:53:06.253002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.256812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.256976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.256993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.260806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.260988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.261006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.264879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.265055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.265072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.268658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.268829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.268847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.272433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.272621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.272638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.276213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.276383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.276400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.279943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.280120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.280144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.283762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.283931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.283948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.287544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.287712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.287730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.291310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.291463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.291481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.295707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.295920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.295939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.300058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 
00:28:18.627 [2024-12-10 05:53:06.300229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.300246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.304035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.304183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.304201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.307982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.308142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.308159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.312055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.312227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.312245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.316009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.316194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.316211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.319769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.319932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.319950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.323647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.323809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.323826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.327536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.327704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.327723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.331445] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.331588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.331606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.335258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.335426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.335444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.339212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.339384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.627 [2024-12-10 05:53:06.339402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.627 [2024-12-10 05:53:06.343486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.627 [2024-12-10 05:53:06.343678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.343695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:18.628 [2024-12-10 05:53:06.348629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.348831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.348850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.355234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.355553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.355571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.360845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.361024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.361041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.366294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.366570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.366590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.371517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.371768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.371786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.376921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.377112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.377130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.382160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.382384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.382402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.387267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.387439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.387457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.392603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.392865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.392883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.397914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.398204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.398226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.403156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.403407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.403426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.408765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.408994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.628 [2024-12-10 05:53:06.409012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.413930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.414199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.414217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.419473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.419663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.419681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.424499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.424709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.424728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.429936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.430077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.430095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.435068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.435314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.435333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.439723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.439884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.439902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.443593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.443758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.443776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.447809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.447949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.447967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.451962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.452130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.452149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.456180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.456333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.456353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.460349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.460536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.460554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.464650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 
00:28:18.628 [2024-12-10 05:53:06.464782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.464799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.469547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.469648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.469665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.474559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.474739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.474756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.628 [2024-12-10 05:53:06.480094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.628 [2024-12-10 05:53:06.480262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.628 [2024-12-10 05:53:06.480279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.629 [2024-12-10 05:53:06.485784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.629 [2024-12-10 05:53:06.485939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.629 [2024-12-10 05:53:06.485956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.629 [2024-12-10 05:53:06.490604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.629 [2024-12-10 05:53:06.490701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.629 [2024-12-10 05:53:06.490719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.629 [2024-12-10 05:53:06.495060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.629 [2024-12-10 05:53:06.495187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.629 [2024-12-10 05:53:06.495205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.629 [2024-12-10 05:53:06.500095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.629 [2024-12-10 05:53:06.500194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.629 [2024-12-10 05:53:06.500212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.629 [2024-12-10 05:53:06.504379] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.629 [2024-12-10 05:53:06.504490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.629 [2024-12-10 05:53:06.504508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.629 [2024-12-10 05:53:06.508878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.629 [2024-12-10 05:53:06.508991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.629 [2024-12-10 05:53:06.509008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.629 [2024-12-10 05:53:06.513289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.629 [2024-12-10 05:53:06.513414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.629 [2024-12-10 05:53:06.513432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.889 [2024-12-10 05:53:06.518581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.889 [2024-12-10 05:53:06.518707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.889 [2024-12-10 05:53:06.518724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:18.889 [2024-12-10 05:53:06.523533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.889 [2024-12-10 05:53:06.523643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.889 [2024-12-10 05:53:06.523665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.889 [2024-12-10 05:53:06.527834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.889 [2024-12-10 05:53:06.527951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.889 [2024-12-10 05:53:06.527969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.889 [2024-12-10 05:53:06.532186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.889 [2024-12-10 05:53:06.532304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.889 [2024-12-10 05:53:06.532322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.889 [2024-12-10 05:53:06.536576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.889 [2024-12-10 05:53:06.536676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.889 [2024-12-10 05:53:06.536694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.889 [2024-12-10 05:53:06.541036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.889 [2024-12-10 05:53:06.541153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.889 [2024-12-10 05:53:06.541176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.889 [2024-12-10 05:53:06.545682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.889 [2024-12-10 05:53:06.545821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.545839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.550898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.550984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.551002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.555480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.555577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.555594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.560015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.560151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.560173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.564907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.565045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.565063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.569643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.569801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.569818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.574030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.574201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.890 [2024-12-10 05:53:06.574218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.578386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.578512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.578530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.582338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.582455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.582472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.586511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.586617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.586634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.590689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.590796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.590814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.595078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.595219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.595237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.599009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.599117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.599134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.603539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.603645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.890 [2024-12-10 05:53:06.603662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.890 [2024-12-10 05:53:06.608188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:18.890 [2024-12-10 05:53:06.608302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.608320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.612131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.612269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.612287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.615907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.616057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.616074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.620109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.620288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.620305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.625529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.625747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.625766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.631342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.631595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.631614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.637478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.637630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.637648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.642053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.642231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.642253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.645966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.646110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.646127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.650046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.650189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.650207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.654406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.654568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.654585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.659418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.659643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.659663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.664551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.664860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.664880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.669913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.890 [2024-12-10 05:53:06.670214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.890 [2024-12-10 05:53:06.670234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:18.890 [2024-12-10 05:53:06.675097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.675291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.675309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.680163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.680401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.680421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.685407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.685696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.685715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.690591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.690808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.690828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.695972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.696155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.696179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.700472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.700629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.700647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.704880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.705092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.705112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.710156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.710316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.710339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.714673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.714838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.714857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.719376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.719547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.719566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.723980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.724144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.724162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.728641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.728794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.728812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.732870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.733045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.733064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.737519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.737667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.737685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.742081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.742239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.742257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.746562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.746724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.746742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.750868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.751017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.751035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.755431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.755584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.755602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.760029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.760205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.760223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.764896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.765069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.765091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.769517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.769675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.769693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.773831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.773989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.774007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:18.891 [2024-12-10 05:53:06.778183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:18.891 [2024-12-10 05:53:06.778333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.891 [2024-12-10 05:53:06.778351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.782915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.783062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.783080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.787569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.787717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.787734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.792230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.792398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.792416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.796641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.796805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.796824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.801398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.801545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.801563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.806001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.806158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.806181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.810005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.810155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.810180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.813832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.813987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.814005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.817598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.817761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.817779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.821509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.821662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.821680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.825536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.825683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.825701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.829468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.829616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.829634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.833402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.833553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.833571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.837339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.837513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.837531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.841277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.841451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.841469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.846133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.846297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.846315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.850294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.850445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.850462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.854412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.854571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.854589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.858456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.858608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.858626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.862303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.862455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.862473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.866163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.866320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.152 [2024-12-10 05:53:06.866338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.152 [2024-12-10 05:53:06.870213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.152 [2024-12-10 05:53:06.870370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.870388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.874118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.874273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.874296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.878022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.878176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.878194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.881907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.882075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.882093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.885878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.886044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.886063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.889755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.889920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.889938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.893732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.893898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.893917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.897707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.897873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.897891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.901693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.901846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.901863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.905593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.905741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.905759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.909539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.909703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.909722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.913527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.913681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.913700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.917524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.917671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.917688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.921528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.921676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.921694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.925559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.925724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.925743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.929547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.929699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.929719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.933541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.933694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.933712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.937491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.937639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.937656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.941426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.941575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.941594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.945288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8
00:28:19.153 [2024-12-10 05:53:06.945462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.153 [2024-12-10 05:53:06.945480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.153 [2024-12-10 05:53:06.949161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error
on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.153 [2024-12-10 05:53:06.949359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.153 [2024-12-10 05:53:06.949377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.153 [2024-12-10 05:53:06.953949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.153 [2024-12-10 05:53:06.954206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.153 [2024-12-10 05:53:06.954226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.153 [2024-12-10 05:53:06.959592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.153 [2024-12-10 05:53:06.959803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.153 [2024-12-10 05:53:06.959823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.153 [2024-12-10 05:53:06.965153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.153 [2024-12-10 05:53:06.965353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.153 [2024-12-10 05:53:06.965372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.153 [2024-12-10 05:53:06.971266] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.153 [2024-12-10 05:53:06.971448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.153 [2024-12-10 05:53:06.971466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.153 [2024-12-10 05:53:06.977750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.153 [2024-12-10 05:53:06.978034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.153 [2024-12-10 05:53:06.978054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.153 [2024-12-10 05:53:06.984360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.153 [2024-12-10 05:53:06.984567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.153 [2024-12-10 05:53:06.984587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.153 [2024-12-10 05:53:06.990587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.153 [2024-12-10 05:53:06.990769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.153 [2024-12-10 05:53:06.990791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:19.153 [2024-12-10 05:53:06.997655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.153 [2024-12-10 05:53:06.997954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.154 [2024-12-10 05:53:06.997974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.154 [2024-12-10 05:53:07.003871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.154 [2024-12-10 05:53:07.004066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.154 [2024-12-10 05:53:07.004084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.154 [2024-12-10 05:53:07.009621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.154 [2024-12-10 05:53:07.009910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.154 [2024-12-10 05:53:07.009929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.154 [2024-12-10 05:53:07.016075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.154 [2024-12-10 05:53:07.016292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.154 [2024-12-10 05:53:07.016311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.154 [2024-12-10 05:53:07.022332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.154 [2024-12-10 05:53:07.022508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.154 [2024-12-10 05:53:07.022526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.154 [2024-12-10 05:53:07.028433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.154 [2024-12-10 05:53:07.028726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.154 [2024-12-10 05:53:07.028746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.154 [2024-12-10 05:53:07.034720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.154 [2024-12-10 05:53:07.035020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.154 [2024-12-10 05:53:07.035039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.154 [2024-12-10 05:53:07.040781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.154 [2024-12-10 05:53:07.041006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.154 [2024-12-10 05:53:07.041025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.047003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.047251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.047271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.053539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.053740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.053760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.059126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.059321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.059340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.063807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.063976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:19.414 [2024-12-10 05:53:07.063994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.067782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.067952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.067970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.071668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.071842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.071860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.075559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.075731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.075749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.079434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.079588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.079606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.083246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.083400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.083417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.087078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.087243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.087261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.090916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.091071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.091089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.094730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.094891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.094909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.098588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.098752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.098770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.102447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.102612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.102631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.106286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.106450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.106468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.110121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 
00:28:19.414 [2024-12-10 05:53:07.110279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.110297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.113943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.114099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.114117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.117745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.117915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.117937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.121578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.121740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.121758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.125375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.125542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.125560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.129196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.129375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.129393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.133020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.133180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.414 [2024-12-10 05:53:07.133198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.414 [2024-12-10 05:53:07.137119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.414 [2024-12-10 05:53:07.137283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.137301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.415 [2024-12-10 05:53:07.141697] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.141847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.141865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.415 [2024-12-10 05:53:07.145971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.146130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.146149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.415 [2024-12-10 05:53:07.149914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.150069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.150087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.415 [2024-12-10 05:53:07.154158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.154330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.154348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:19.415 [2024-12-10 05:53:07.158004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.158162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.158187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.415 [2024-12-10 05:53:07.161882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.162038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.162056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.415 [2024-12-10 05:53:07.165754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.165912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.165930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.415 [2024-12-10 05:53:07.169611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.169782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.169801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.415 [2024-12-10 05:53:07.173444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.173605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.173623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.415 [2024-12-10 05:53:07.177279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.177433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.177451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.415 [2024-12-10 05:53:07.181075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.181238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 05:53:07.181256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.415 6826.00 IOPS, 853.25 MiB/s [2024-12-10T04:53:07.311Z] [2024-12-10 05:53:07.185541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455750) with pdu=0x200016eff3c8 00:28:19.415 [2024-12-10 05:53:07.185592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-12-10 
05:53:07.185610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.415
00:28:19.415 Latency(us)
00:28:19.415 [2024-12-10T04:53:07.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.415 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:19.415 nvme0n1 : 2.00 6824.43 853.05 0.00 0.00 2340.39 1427.75 9611.95
00:28:19.415 [2024-12-10T04:53:07.311Z] ===================================================================================================================
00:28:19.415 [2024-12-10T04:53:07.311Z] Total : 6824.43 853.05 0.00 0.00 2340.39 1427.75 9611.95
00:28:19.415 {
00:28:19.415   "results": [
00:28:19.415     {
00:28:19.415       "job": "nvme0n1",
00:28:19.415       "core_mask": "0x2",
00:28:19.415       "workload": "randwrite",
00:28:19.415       "status": "finished",
00:28:19.415       "queue_depth": 16,
00:28:19.415       "io_size": 131072,
00:28:19.415       "runtime": 2.003391,
00:28:19.415       "iops": 6824.42918032476,
00:28:19.415       "mibps": 853.053647540595,
00:28:19.415       "io_failed": 0,
00:28:19.415       "io_timeout": 0,
00:28:19.415       "avg_latency_us": 2340.387965114659,
00:28:19.415       "min_latency_us": 1427.7485714285715,
00:28:19.415       "max_latency_us": 9611.946666666667
00:28:19.415     }
00:28:19.415   ],
00:28:19.415   "core_count": 1
00:28:19.415 }
00:28:19.415 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:19.415 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:19.415 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:19.415 | .driver_specific
00:28:19.415 | .nvme_error
00:28:19.415 | .status_code
00:28:19.415 | .command_transient_transport_error'
00:28:19.415 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 --
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:19.674 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 442 > 0 ))
00:28:19.674 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1340474
00:28:19.674 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1340474 ']'
00:28:19.674 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1340474
00:28:19.674 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:19.674 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:19.674 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1340474
00:28:19.674 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:19.674 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:19.674 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1340474'
killing process with pid 1340474
00:28:19.675 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1340474
Received shutdown signal, test time was about 2.000000 seconds
00:28:19.675
00:28:19.675 Latency(us)
00:28:19.675 [2024-12-10T04:53:07.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.675 [2024-12-10T04:53:07.571Z] ===================================================================================================================
00:28:19.675 [2024-12-10T04:53:07.571Z] Total :
0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:19.675 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1340474
00:28:19.933 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1338754
00:28:19.933 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1338754 ']'
00:28:19.933 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1338754
00:28:19.933 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:19.933 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:19.933 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1338754
00:28:19.933 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:19.933 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:19.933 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1338754'
killing process with pid 1338754
00:28:19.933 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1338754
00:28:19.933 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1338754
00:28:20.193
00:28:20.193 real 0m14.150s
00:28:20.193 user 0m27.014s
00:28:20.193 sys 0m4.704s
00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:20.193
************************************ 00:28:20.193 END TEST nvmf_digest_error 00:28:20.193 ************************************ 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:20.193 rmmod nvme_tcp 00:28:20.193 rmmod nvme_fabrics 00:28:20.193 rmmod nvme_keyring 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1338754 ']' 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1338754 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1338754 ']' 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1338754 00:28:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1338754) - No such process 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1338754 is not found' 00:28:20.193 Process with pid 1338754 is 
not found 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.193 05:53:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.728 05:53:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:22.728 00:28:22.728 real 0m36.771s 00:28:22.728 user 0m56.207s 00:28:22.728 sys 0m13.860s 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:22.728 ************************************ 00:28:22.728 END TEST nvmf_digest 00:28:22.728 ************************************ 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:22.728 05:53:10 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.728 ************************************ 00:28:22.728 START TEST nvmf_bdevperf 00:28:22.728 ************************************ 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:22.728 * Looking for test storage... 00:28:22.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@337 -- # IFS=.-: 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:22.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.728 --rc genhtml_branch_coverage=1 00:28:22.728 --rc genhtml_function_coverage=1 00:28:22.728 --rc genhtml_legend=1 00:28:22.728 --rc geninfo_all_blocks=1 00:28:22.728 --rc geninfo_unexecuted_blocks=1 00:28:22.728 00:28:22.728 ' 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:22.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.728 --rc genhtml_branch_coverage=1 00:28:22.728 --rc genhtml_function_coverage=1 00:28:22.728 --rc genhtml_legend=1 00:28:22.728 --rc geninfo_all_blocks=1 00:28:22.728 --rc geninfo_unexecuted_blocks=1 00:28:22.728 00:28:22.728 ' 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:22.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.728 --rc genhtml_branch_coverage=1 00:28:22.728 --rc genhtml_function_coverage=1 00:28:22.728 --rc genhtml_legend=1 00:28:22.728 --rc geninfo_all_blocks=1 00:28:22.728 --rc geninfo_unexecuted_blocks=1 00:28:22.728 00:28:22.728 ' 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:22.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.728 --rc genhtml_branch_coverage=1 00:28:22.728 --rc genhtml_function_coverage=1 00:28:22.728 --rc genhtml_legend=1 00:28:22.728 --rc geninfo_all_blocks=1 
00:28:22.728 --rc geninfo_unexecuted_blocks=1 00:28:22.728 00:28:22.728 ' 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:22.728 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:22.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- host/bdevperf.sh@24 -- # nvmftestinit 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:22.729 05:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:28.001 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:28.001 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:28.001 Found net devices under 0000:af:00.0: cvl_0_0 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.001 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:28.001 Found net devices under 0000:af:00.1: cvl_0_1 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.002 05:53:15 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.002 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.261 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.261 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.261 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:28.261 05:53:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:28.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:28:28.261 00:28:28.261 --- 10.0.0.2 ping statistics --- 00:28:28.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.261 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:28.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:28:28.261 00:28:28.261 --- 10.0.0.1 ping statistics --- 00:28:28.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.261 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:28.261 05:53:16 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1344522 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1344522 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1344522 ']' 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.261 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.520 [2024-12-10 05:53:16.188019] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:28:28.520 [2024-12-10 05:53:16.188066] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.520 [2024-12-10 05:53:16.263922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:28.520 [2024-12-10 05:53:16.304793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.520 [2024-12-10 05:53:16.304830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.520 [2024-12-10 05:53:16.304837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.520 [2024-12-10 05:53:16.304843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.520 [2024-12-10 05:53:16.304848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:28.520 [2024-12-10 05:53:16.306201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.520 [2024-12-10 05:53:16.306307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.520 [2024-12-10 05:53:16.306309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.520 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.520 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:28.520 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:28.520 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.520 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.779 [2024-12-10 05:53:16.442796] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.779 Malloc0 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.779 [2024-12-10 05:53:16.513744] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:28.779 
05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.779 { 00:28:28.779 "params": { 00:28:28.779 "name": "Nvme$subsystem", 00:28:28.779 "trtype": "$TEST_TRANSPORT", 00:28:28.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.779 "adrfam": "ipv4", 00:28:28.779 "trsvcid": "$NVMF_PORT", 00:28:28.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.779 "hdgst": ${hdgst:-false}, 00:28:28.779 "ddgst": ${ddgst:-false} 00:28:28.779 }, 00:28:28.779 "method": "bdev_nvme_attach_controller" 00:28:28.779 } 00:28:28.779 EOF 00:28:28.779 )") 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:28.779 05:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:28.779 "params": { 00:28:28.779 "name": "Nvme1", 00:28:28.779 "trtype": "tcp", 00:28:28.779 "traddr": "10.0.0.2", 00:28:28.779 "adrfam": "ipv4", 00:28:28.779 "trsvcid": "4420", 00:28:28.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.779 "hdgst": false, 00:28:28.779 "ddgst": false 00:28:28.779 }, 00:28:28.779 "method": "bdev_nvme_attach_controller" 00:28:28.779 }' 00:28:28.779 [2024-12-10 05:53:16.567646] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:28:28.779 [2024-12-10 05:53:16.567687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344552 ] 00:28:28.779 [2024-12-10 05:53:16.640254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.038 [2024-12-10 05:53:16.680673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.296 Running I/O for 1 seconds... 00:28:30.232 11474.00 IOPS, 44.82 MiB/s 00:28:30.232 Latency(us) 00:28:30.232 [2024-12-10T04:53:18.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.232 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.232 Verification LBA range: start 0x0 length 0x4000 00:28:30.232 Nvme1n1 : 1.01 11532.86 45.05 0.00 0.00 11057.48 1318.52 11047.50 00:28:30.232 [2024-12-10T04:53:18.128Z] =================================================================================================================== 00:28:30.232 [2024-12-10T04:53:18.128Z] Total : 11532.86 45.05 0.00 0.00 11057.48 1318.52 11047.50 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1344780 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.491 { 00:28:30.491 "params": { 00:28:30.491 "name": "Nvme$subsystem", 00:28:30.491 "trtype": "$TEST_TRANSPORT", 00:28:30.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.491 "adrfam": "ipv4", 00:28:30.491 "trsvcid": "$NVMF_PORT", 00:28:30.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.491 "hdgst": ${hdgst:-false}, 00:28:30.491 "ddgst": ${ddgst:-false} 00:28:30.491 }, 00:28:30.491 "method": "bdev_nvme_attach_controller" 00:28:30.491 } 00:28:30.491 EOF 00:28:30.491 )") 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:30.491 05:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:30.491 "params": { 00:28:30.491 "name": "Nvme1", 00:28:30.491 "trtype": "tcp", 00:28:30.491 "traddr": "10.0.0.2", 00:28:30.491 "adrfam": "ipv4", 00:28:30.491 "trsvcid": "4420", 00:28:30.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:30.491 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:30.491 "hdgst": false, 00:28:30.491 "ddgst": false 00:28:30.491 }, 00:28:30.491 "method": "bdev_nvme_attach_controller" 00:28:30.491 }' 00:28:30.491 [2024-12-10 05:53:18.177534] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:28:30.491 [2024-12-10 05:53:18.177579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344780 ] 00:28:30.491 [2024-12-10 05:53:18.252627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.491 [2024-12-10 05:53:18.289793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.750 Running I/O for 15 seconds... 00:28:33.061 11282.00 IOPS, 44.07 MiB/s [2024-12-10T04:53:21.218Z] 11468.00 IOPS, 44.80 MiB/s [2024-12-10T04:53:21.218Z] 05:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1344522 00:28:33.322 05:53:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:33.322 [2024-12-10 05:53:21.146081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146185] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.322 [2024-12-10 05:53:21.146240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.322 [2024-12-10 05:53:21.146255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.322 [2024-12-10 05:53:21.146270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.322 [2024-12-10 05:53:21.146285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.322 [2024-12-10 05:53:21.146300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.322 [2024-12-10 05:53:21.146315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.322 [2024-12-10 05:53:21.146331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.322 [2024-12-10 05:53:21.146346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.322 [2024-12-10 
05:53:21.146361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.322 [2024-12-10 05:53:21.146380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.322 [2024-12-10 05:53:21.146395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146455] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.322 [2024-12-10 05:53:21.146526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.322 [2024-12-10 05:53:21.146534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.323 [2024-12-10 05:53:21.146540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.323 [2024-12-10 05:53:21.146554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.323 [2024-12-10 05:53:21.146571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.323 [2024-12-10 05:53:21.146585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.323 [2024-12-10 05:53:21.146600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.323 [2024-12-10 05:53:21.146614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.323 [2024-12-10 
05:53:21.146627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.323 [2024-12-10 05:53:21.146642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146709] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 
05:53:21.146873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:109 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.146988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.146996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.147003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.147011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.147017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.147025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.147032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.147040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.147046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.147054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.147060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.147068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.147074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.147082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.147088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.147096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.147102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.147116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.323 [2024-12-10 05:53:21.147124] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.323 [2024-12-10 05:53:21.147132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:126 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147491] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:119 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.324 [2024-12-10 05:53:21.147827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.324 [2024-12-10 05:53:21.147835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.147841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.147850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.147856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.147864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.147871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.147879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.147885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.147893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.147899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.147907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.147913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.147921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.147927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.147936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.147942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.147950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.147956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.147964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.147972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.147980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.147986] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.147994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.148000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.148014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.148028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.148042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.148056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:64 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.148070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.148086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.148100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.148114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.325 [2024-12-10 05:53:21.148128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae5510 is same with the state(6) to be set 00:28:33.325 [2024-12-10 05:53:21.148143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.325 [2024-12-10 05:53:21.148149] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.325 [2024-12-10 05:53:21.148157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104728 len:8 PRP1 0x0 PRP2 0x0 00:28:33.325 [2024-12-10 05:53:21.148164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.325 [2024-12-10 05:53:21.148258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.325 [2024-12-10 05:53:21.148271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.325 [2024-12-10 05:53:21.148285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.325 [2024-12-10 05:53:21.148297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.325 [2024-12-10 05:53:21.148304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.325 [2024-12-10 
05:53:21.151074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.325 [2024-12-10 05:53:21.151099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.325 [2024-12-10 05:53:21.151698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.325 [2024-12-10 05:53:21.151715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.325 [2024-12-10 05:53:21.151724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.325 [2024-12-10 05:53:21.151898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.325 [2024-12-10 05:53:21.152071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.325 [2024-12-10 05:53:21.152078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.325 [2024-12-10 05:53:21.152086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.325 [2024-12-10 05:53:21.152093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.325 [2024-12-10 05:53:21.164306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.325 [2024-12-10 05:53:21.164679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.325 [2024-12-10 05:53:21.164697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.325 [2024-12-10 05:53:21.164706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.325 [2024-12-10 05:53:21.164879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.325 [2024-12-10 05:53:21.165053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.325 [2024-12-10 05:53:21.165061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.325 [2024-12-10 05:53:21.165072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.325 [2024-12-10 05:53:21.165078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.325 [2024-12-10 05:53:21.177145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.325 [2024-12-10 05:53:21.177527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.325 [2024-12-10 05:53:21.177544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.325 [2024-12-10 05:53:21.177552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.325 [2024-12-10 05:53:21.177720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.325 [2024-12-10 05:53:21.177889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.325 [2024-12-10 05:53:21.177897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.325 [2024-12-10 05:53:21.177903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.325 [2024-12-10 05:53:21.177910] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.325 [2024-12-10 05:53:21.189967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.325 [2024-12-10 05:53:21.190413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.325 [2024-12-10 05:53:21.190431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.326 [2024-12-10 05:53:21.190438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.326 [2024-12-10 05:53:21.190606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.326 [2024-12-10 05:53:21.190773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.326 [2024-12-10 05:53:21.190782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.326 [2024-12-10 05:53:21.190788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.326 [2024-12-10 05:53:21.190794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.326 [2024-12-10 05:53:21.202796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.326 [2024-12-10 05:53:21.203300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.326 [2024-12-10 05:53:21.203346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.326 [2024-12-10 05:53:21.203369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.326 [2024-12-10 05:53:21.203894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.326 [2024-12-10 05:53:21.204054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.326 [2024-12-10 05:53:21.204063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.326 [2024-12-10 05:53:21.204068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.326 [2024-12-10 05:53:21.204074] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.586 [2024-12-10 05:53:21.215719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.586 [2024-12-10 05:53:21.216175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-12-10 05:53:21.216193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.586 [2024-12-10 05:53:21.216200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.586 [2024-12-10 05:53:21.216369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.586 [2024-12-10 05:53:21.216536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.586 [2024-12-10 05:53:21.216544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.586 [2024-12-10 05:53:21.216551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.586 [2024-12-10 05:53:21.216557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.586 [2024-12-10 05:53:21.228549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.586 [2024-12-10 05:53:21.228854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-12-10 05:53:21.228872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.586 [2024-12-10 05:53:21.228879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.586 [2024-12-10 05:53:21.229047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.586 [2024-12-10 05:53:21.229221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.586 [2024-12-10 05:53:21.229229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.586 [2024-12-10 05:53:21.229236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.586 [2024-12-10 05:53:21.229242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.586 [2024-12-10 05:53:21.241560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.586 [2024-12-10 05:53:21.241906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-12-10 05:53:21.241924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.586 [2024-12-10 05:53:21.241931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.586 [2024-12-10 05:53:21.242103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.586 [2024-12-10 05:53:21.242283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.586 [2024-12-10 05:53:21.242292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.586 [2024-12-10 05:53:21.242299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.586 [2024-12-10 05:53:21.242305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.586 [2024-12-10 05:53:21.254669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.586 [2024-12-10 05:53:21.255075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-12-10 05:53:21.255092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.586 [2024-12-10 05:53:21.255102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.586 [2024-12-10 05:53:21.255300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.586 [2024-12-10 05:53:21.255483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.586 [2024-12-10 05:53:21.255491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.586 [2024-12-10 05:53:21.255498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.586 [2024-12-10 05:53:21.255504] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.586 [2024-12-10 05:53:21.267752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.586 [2024-12-10 05:53:21.268199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-12-10 05:53:21.268217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.586 [2024-12-10 05:53:21.268225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.586 [2024-12-10 05:53:21.268408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.586 [2024-12-10 05:53:21.268592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.586 [2024-12-10 05:53:21.268601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.586 [2024-12-10 05:53:21.268608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.586 [2024-12-10 05:53:21.268614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.586 [2024-12-10 05:53:21.280905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.586 [2024-12-10 05:53:21.281307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-12-10 05:53:21.281325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.587 [2024-12-10 05:53:21.281332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.587 [2024-12-10 05:53:21.281516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.587 [2024-12-10 05:53:21.281700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.587 [2024-12-10 05:53:21.281708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.587 [2024-12-10 05:53:21.281715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.587 [2024-12-10 05:53:21.281721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.587 [2024-12-10 05:53:21.293981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.587 [2024-12-10 05:53:21.294403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-12-10 05:53:21.294420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.587 [2024-12-10 05:53:21.294427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.587 [2024-12-10 05:53:21.294600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.587 [2024-12-10 05:53:21.294775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.587 [2024-12-10 05:53:21.294784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.587 [2024-12-10 05:53:21.294790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.587 [2024-12-10 05:53:21.294797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.587 [2024-12-10 05:53:21.307028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.587 [2024-12-10 05:53:21.307448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-12-10 05:53:21.307465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.587 [2024-12-10 05:53:21.307472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.587 [2024-12-10 05:53:21.307656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.587 [2024-12-10 05:53:21.307839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.587 [2024-12-10 05:53:21.307848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.587 [2024-12-10 05:53:21.307855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.587 [2024-12-10 05:53:21.307862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.587 [2024-12-10 05:53:21.320149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.587 [2024-12-10 05:53:21.320554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-12-10 05:53:21.320572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.587 [2024-12-10 05:53:21.320579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.587 [2024-12-10 05:53:21.320752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.587 [2024-12-10 05:53:21.320924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.587 [2024-12-10 05:53:21.320932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.587 [2024-12-10 05:53:21.320939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.587 [2024-12-10 05:53:21.320945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.587 [2024-12-10 05:53:21.333208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.587 [2024-12-10 05:53:21.333643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-12-10 05:53:21.333659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.587 [2024-12-10 05:53:21.333666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.587 [2024-12-10 05:53:21.333838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.587 [2024-12-10 05:53:21.334012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.587 [2024-12-10 05:53:21.334020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.587 [2024-12-10 05:53:21.334030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.587 [2024-12-10 05:53:21.334036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.587 [2024-12-10 05:53:21.346217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.587 [2024-12-10 05:53:21.346648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-12-10 05:53:21.346664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.587 [2024-12-10 05:53:21.346671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.587 [2024-12-10 05:53:21.346844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.587 [2024-12-10 05:53:21.347016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.587 [2024-12-10 05:53:21.347024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.587 [2024-12-10 05:53:21.347030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.587 [2024-12-10 05:53:21.347036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.587 [2024-12-10 05:53:21.359438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.587 [2024-12-10 05:53:21.359859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-12-10 05:53:21.359876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.587 [2024-12-10 05:53:21.359884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.587 [2024-12-10 05:53:21.360067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.587 [2024-12-10 05:53:21.360256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.587 [2024-12-10 05:53:21.360265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.587 [2024-12-10 05:53:21.360271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.587 [2024-12-10 05:53:21.360278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.587 [2024-12-10 05:53:21.372539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.587 [2024-12-10 05:53:21.372964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-12-10 05:53:21.372981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.587 [2024-12-10 05:53:21.372988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.587 [2024-12-10 05:53:21.373177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.587 [2024-12-10 05:53:21.373360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.587 [2024-12-10 05:53:21.373369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.587 [2024-12-10 05:53:21.373375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.587 [2024-12-10 05:53:21.373381] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.587 [2024-12-10 05:53:21.386043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.587 [2024-12-10 05:53:21.386517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-12-10 05:53:21.386534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.587 [2024-12-10 05:53:21.386542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.587 [2024-12-10 05:53:21.386725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.587 [2024-12-10 05:53:21.386908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.587 [2024-12-10 05:53:21.386917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.587 [2024-12-10 05:53:21.386923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.587 [2024-12-10 05:53:21.386930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.587 [2024-12-10 05:53:21.399253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.587 [2024-12-10 05:53:21.399699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-12-10 05:53:21.399717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.587 [2024-12-10 05:53:21.399724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.587 [2024-12-10 05:53:21.399908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.587 [2024-12-10 05:53:21.400091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.587 [2024-12-10 05:53:21.400100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.587 [2024-12-10 05:53:21.400107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.587 [2024-12-10 05:53:21.400113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.587 [2024-12-10 05:53:21.412516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.587 [2024-12-10 05:53:21.412916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-12-10 05:53:21.412933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.587 [2024-12-10 05:53:21.412940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.588 [2024-12-10 05:53:21.413137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.588 [2024-12-10 05:53:21.413328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.588 [2024-12-10 05:53:21.413337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.588 [2024-12-10 05:53:21.413344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.588 [2024-12-10 05:53:21.413350] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.588 [2024-12-10 05:53:21.425605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.588 [2024-12-10 05:53:21.425958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-12-10 05:53:21.425975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.588 [2024-12-10 05:53:21.425987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.588 [2024-12-10 05:53:21.426176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.588 [2024-12-10 05:53:21.426372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.588 [2024-12-10 05:53:21.426381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.588 [2024-12-10 05:53:21.426387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.588 [2024-12-10 05:53:21.426394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.588 [2024-12-10 05:53:21.438544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.588 [2024-12-10 05:53:21.438833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-12-10 05:53:21.438849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.588 [2024-12-10 05:53:21.438856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.588 [2024-12-10 05:53:21.439024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.588 [2024-12-10 05:53:21.439216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.588 [2024-12-10 05:53:21.439225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.588 [2024-12-10 05:53:21.439231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.588 [2024-12-10 05:53:21.439238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.588 [2024-12-10 05:53:21.451445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.588 [2024-12-10 05:53:21.451747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-12-10 05:53:21.451764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.588 [2024-12-10 05:53:21.451771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.588 [2024-12-10 05:53:21.451943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.588 [2024-12-10 05:53:21.452116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.588 [2024-12-10 05:53:21.452124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.588 [2024-12-10 05:53:21.452130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.588 [2024-12-10 05:53:21.452137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.588 [2024-12-10 05:53:21.464386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.588 [2024-12-10 05:53:21.464739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-12-10 05:53:21.464755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.588 [2024-12-10 05:53:21.464762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.588 [2024-12-10 05:53:21.464929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.588 [2024-12-10 05:53:21.465099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.588 [2024-12-10 05:53:21.465107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.588 [2024-12-10 05:53:21.465113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.588 [2024-12-10 05:53:21.465118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.848 [2024-12-10 05:53:21.477377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.848 [2024-12-10 05:53:21.477670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.848 [2024-12-10 05:53:21.477687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.848 [2024-12-10 05:53:21.477694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.848 [2024-12-10 05:53:21.477861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.848 [2024-12-10 05:53:21.478033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.848 [2024-12-10 05:53:21.478041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.848 [2024-12-10 05:53:21.478047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.848 [2024-12-10 05:53:21.478053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.848 [2024-12-10 05:53:21.490300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.848 [2024-12-10 05:53:21.490582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.848 [2024-12-10 05:53:21.490629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:33.848 [2024-12-10 05:53:21.490652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:33.848 [2024-12-10 05:53:21.491250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:33.848 [2024-12-10 05:53:21.491557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.848 [2024-12-10 05:53:21.491565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.848 [2024-12-10 05:53:21.491572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.848 [2024-12-10 05:53:21.491579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.848 [2024-12-10 05:53:21.503140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.848 [2024-12-10 05:53:21.503503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.848 [2024-12-10 05:53:21.503519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.848 [2024-12-10 05:53:21.503526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.848 [2024-12-10 05:53:21.503694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.848 [2024-12-10 05:53:21.503861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.848 [2024-12-10 05:53:21.503869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.848 [2024-12-10 05:53:21.503879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.848 [2024-12-10 05:53:21.503885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.848 [2024-12-10 05:53:21.516078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.848 [2024-12-10 05:53:21.516498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.848 [2024-12-10 05:53:21.516516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.848 [2024-12-10 05:53:21.516523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.848 [2024-12-10 05:53:21.516690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.848 [2024-12-10 05:53:21.516857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.848 [2024-12-10 05:53:21.516865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.848 [2024-12-10 05:53:21.516871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.848 [2024-12-10 05:53:21.516877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.849 [2024-12-10 05:53:21.528953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.849 [2024-12-10 05:53:21.529402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.849 [2024-12-10 05:53:21.529419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.849 [2024-12-10 05:53:21.529426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.849 [2024-12-10 05:53:21.529594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.849 [2024-12-10 05:53:21.529761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.849 [2024-12-10 05:53:21.529769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.849 [2024-12-10 05:53:21.529775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.849 [2024-12-10 05:53:21.529781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.849 [2024-12-10 05:53:21.541753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.849 [2024-12-10 05:53:21.542101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.849 [2024-12-10 05:53:21.542117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.849 [2024-12-10 05:53:21.542124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.849 [2024-12-10 05:53:21.542297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.849 [2024-12-10 05:53:21.542464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.849 [2024-12-10 05:53:21.542472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.849 [2024-12-10 05:53:21.542478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.849 [2024-12-10 05:53:21.542484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.849 [2024-12-10 05:53:21.554588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.849 [2024-12-10 05:53:21.554942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.849 [2024-12-10 05:53:21.554958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.849 [2024-12-10 05:53:21.554966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.849 [2024-12-10 05:53:21.555133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.849 [2024-12-10 05:53:21.555310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.849 [2024-12-10 05:53:21.555318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.849 [2024-12-10 05:53:21.555325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.849 [2024-12-10 05:53:21.555331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.849 [2024-12-10 05:53:21.567352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.849 [2024-12-10 05:53:21.567694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.849 [2024-12-10 05:53:21.567711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.849 [2024-12-10 05:53:21.567717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.849 [2024-12-10 05:53:21.567885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.849 [2024-12-10 05:53:21.568053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.849 [2024-12-10 05:53:21.568060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.849 [2024-12-10 05:53:21.568066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.849 [2024-12-10 05:53:21.568072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.849 [2024-12-10 05:53:21.580194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.849 [2024-12-10 05:53:21.580621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.849 [2024-12-10 05:53:21.580666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.849 [2024-12-10 05:53:21.580689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.849 [2024-12-10 05:53:21.581286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.849 [2024-12-10 05:53:21.581857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.849 [2024-12-10 05:53:21.581865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.849 [2024-12-10 05:53:21.581872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.849 [2024-12-10 05:53:21.581877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.849 9797.33 IOPS, 38.27 MiB/s [2024-12-10T04:53:21.745Z] [2024-12-10 05:53:21.593213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.849 [2024-12-10 05:53:21.593649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.849 [2024-12-10 05:53:21.593672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.849 [2024-12-10 05:53:21.593679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.849 [2024-12-10 05:53:21.593847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.849 [2024-12-10 05:53:21.594015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.849 [2024-12-10 05:53:21.594023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.849 [2024-12-10 05:53:21.594029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.849 [2024-12-10 05:53:21.594035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.849 [2024-12-10 05:53:21.606069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.849 [2024-12-10 05:53:21.606507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.849 [2024-12-10 05:53:21.606523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.849 [2024-12-10 05:53:21.606530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.849 [2024-12-10 05:53:21.606698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.849 [2024-12-10 05:53:21.606865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.849 [2024-12-10 05:53:21.606873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.849 [2024-12-10 05:53:21.606880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.849 [2024-12-10 05:53:21.606885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.849 [2024-12-10 05:53:21.618946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.849 [2024-12-10 05:53:21.619360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.849 [2024-12-10 05:53:21.619376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.849 [2024-12-10 05:53:21.619383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.849 [2024-12-10 05:53:21.619541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.849 [2024-12-10 05:53:21.619699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.849 [2024-12-10 05:53:21.619707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.849 [2024-12-10 05:53:21.619713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.849 [2024-12-10 05:53:21.619718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.849 [2024-12-10 05:53:21.631696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.849 [2024-12-10 05:53:21.632124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.849 [2024-12-10 05:53:21.632184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.849 [2024-12-10 05:53:21.632208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.849 [2024-12-10 05:53:21.632682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.849 [2024-12-10 05:53:21.632856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.849 [2024-12-10 05:53:21.632865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.849 [2024-12-10 05:53:21.632871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.849 [2024-12-10 05:53:21.632877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.849 [2024-12-10 05:53:21.644489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.849 [2024-12-10 05:53:21.644908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.849 [2024-12-10 05:53:21.644924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.849 [2024-12-10 05:53:21.644931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.849 [2024-12-10 05:53:21.645089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.849 [2024-12-10 05:53:21.645272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.849 [2024-12-10 05:53:21.645281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.849 [2024-12-10 05:53:21.645287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.849 [2024-12-10 05:53:21.645293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.849 [2024-12-10 05:53:21.657255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.849 [2024-12-10 05:53:21.657672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.850 [2024-12-10 05:53:21.657689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.850 [2024-12-10 05:53:21.657696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.850 [2024-12-10 05:53:21.657869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.850 [2024-12-10 05:53:21.658041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.850 [2024-12-10 05:53:21.658049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.850 [2024-12-10 05:53:21.658056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.850 [2024-12-10 05:53:21.658063] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.850 [2024-12-10 05:53:21.670276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.850 [2024-12-10 05:53:21.670682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.850 [2024-12-10 05:53:21.670698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.850 [2024-12-10 05:53:21.670706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.850 [2024-12-10 05:53:21.670878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.850 [2024-12-10 05:53:21.671050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.850 [2024-12-10 05:53:21.671058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.850 [2024-12-10 05:53:21.671068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.850 [2024-12-10 05:53:21.671075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.850 [2024-12-10 05:53:21.683215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.850 [2024-12-10 05:53:21.683675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.850 [2024-12-10 05:53:21.683692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.850 [2024-12-10 05:53:21.683699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.850 [2024-12-10 05:53:21.683866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.850 [2024-12-10 05:53:21.684037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.850 [2024-12-10 05:53:21.684044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.850 [2024-12-10 05:53:21.684050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.850 [2024-12-10 05:53:21.684057] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.850 [2024-12-10 05:53:21.695962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.850 [2024-12-10 05:53:21.696415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.850 [2024-12-10 05:53:21.696460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.850 [2024-12-10 05:53:21.696483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.850 [2024-12-10 05:53:21.696998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.850 [2024-12-10 05:53:21.697172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.850 [2024-12-10 05:53:21.697180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.850 [2024-12-10 05:53:21.697186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.850 [2024-12-10 05:53:21.697193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.850 [2024-12-10 05:53:21.708717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.850 [2024-12-10 05:53:21.709058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.850 [2024-12-10 05:53:21.709074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.850 [2024-12-10 05:53:21.709081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.850 [2024-12-10 05:53:21.709264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.850 [2024-12-10 05:53:21.709432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.850 [2024-12-10 05:53:21.709440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.850 [2024-12-10 05:53:21.709446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.850 [2024-12-10 05:53:21.709452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.850 [2024-12-10 05:53:21.721505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.850 [2024-12-10 05:53:21.721940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.850 [2024-12-10 05:53:21.721956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.850 [2024-12-10 05:53:21.721963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.850 [2024-12-10 05:53:21.722130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.850 [2024-12-10 05:53:21.722303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.850 [2024-12-10 05:53:21.722312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.850 [2024-12-10 05:53:21.722318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.850 [2024-12-10 05:53:21.722324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.850 [2024-12-10 05:53:21.734364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.850 [2024-12-10 05:53:21.734786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.850 [2024-12-10 05:53:21.734802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:33.850 [2024-12-10 05:53:21.734809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:33.850 [2024-12-10 05:53:21.734977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:33.850 [2024-12-10 05:53:21.735144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.850 [2024-12-10 05:53:21.735152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.850 [2024-12-10 05:53:21.735158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.850 [2024-12-10 05:53:21.735164] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.113 [2024-12-10 05:53:21.747149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.113 [2024-12-10 05:53:21.747588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.113 [2024-12-10 05:53:21.747605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.113 [2024-12-10 05:53:21.747612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.113 [2024-12-10 05:53:21.747779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.113 [2024-12-10 05:53:21.747946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.113 [2024-12-10 05:53:21.747954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.113 [2024-12-10 05:53:21.747960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.113 [2024-12-10 05:53:21.747966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.113 [2024-12-10 05:53:21.760030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.113 [2024-12-10 05:53:21.760469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.113 [2024-12-10 05:53:21.760489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.113 [2024-12-10 05:53:21.760496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.113 [2024-12-10 05:53:21.760664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.113 [2024-12-10 05:53:21.760831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.113 [2024-12-10 05:53:21.760839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.113 [2024-12-10 05:53:21.760845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.113 [2024-12-10 05:53:21.760850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.113 [2024-12-10 05:53:21.772892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.113 [2024-12-10 05:53:21.773299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.113 [2024-12-10 05:53:21.773345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.113 [2024-12-10 05:53:21.773368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.113 [2024-12-10 05:53:21.773949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.113 [2024-12-10 05:53:21.774481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.113 [2024-12-10 05:53:21.774489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.113 [2024-12-10 05:53:21.774496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.113 [2024-12-10 05:53:21.774502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.113 [2024-12-10 05:53:21.785662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.113 [2024-12-10 05:53:21.786088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.113 [2024-12-10 05:53:21.786131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.113 [2024-12-10 05:53:21.786155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.113 [2024-12-10 05:53:21.786753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.113 [2024-12-10 05:53:21.787346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.113 [2024-12-10 05:53:21.787373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.113 [2024-12-10 05:53:21.787380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.113 [2024-12-10 05:53:21.787386] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.113 [2024-12-10 05:53:21.798403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.113 [2024-12-10 05:53:21.798810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.114 [2024-12-10 05:53:21.798857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.114 [2024-12-10 05:53:21.798881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.114 [2024-12-10 05:53:21.799397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.114 [2024-12-10 05:53:21.799570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.114 [2024-12-10 05:53:21.799577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.114 [2024-12-10 05:53:21.799583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.114 [2024-12-10 05:53:21.799590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.114 [2024-12-10 05:53:21.811252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.114 [2024-12-10 05:53:21.811670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.114 [2024-12-10 05:53:21.811686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.114 [2024-12-10 05:53:21.811693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.114 [2024-12-10 05:53:21.811851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.114 [2024-12-10 05:53:21.812009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.114 [2024-12-10 05:53:21.812017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.114 [2024-12-10 05:53:21.812022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.114 [2024-12-10 05:53:21.812028] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.114 [2024-12-10 05:53:21.824063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.114 [2024-12-10 05:53:21.824395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.114 [2024-12-10 05:53:21.824411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.114 [2024-12-10 05:53:21.824418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.114 [2024-12-10 05:53:21.824585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.114 [2024-12-10 05:53:21.824753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.114 [2024-12-10 05:53:21.824760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.114 [2024-12-10 05:53:21.824766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.114 [2024-12-10 05:53:21.824772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.114 [2024-12-10 05:53:21.836810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.114 [2024-12-10 05:53:21.837221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.114 [2024-12-10 05:53:21.837237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.114 [2024-12-10 05:53:21.837244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.114 [2024-12-10 05:53:21.837402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.114 [2024-12-10 05:53:21.837560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.114 [2024-12-10 05:53:21.837568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.114 [2024-12-10 05:53:21.837577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.114 [2024-12-10 05:53:21.837583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.114 [2024-12-10 05:53:21.849615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.114 [2024-12-10 05:53:21.850013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.114 [2024-12-10 05:53:21.850056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.114 [2024-12-10 05:53:21.850079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.114 [2024-12-10 05:53:21.850610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.114 [2024-12-10 05:53:21.851001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.114 [2024-12-10 05:53:21.851017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.114 [2024-12-10 05:53:21.851031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.114 [2024-12-10 05:53:21.851044] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.114 [2024-12-10 05:53:21.864639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.114 [2024-12-10 05:53:21.865162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-12-10 05:53:21.865189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.114 [2024-12-10 05:53:21.865199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.114 [2024-12-10 05:53:21.865453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.114 [2024-12-10 05:53:21.865708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.114 [2024-12-10 05:53:21.865719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.114 [2024-12-10 05:53:21.865728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.114 [2024-12-10 05:53:21.865736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.114 [2024-12-10 05:53:21.877728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.114 [2024-12-10 05:53:21.878188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-12-10 05:53:21.878236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.115 [2024-12-10 05:53:21.878260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.115 [2024-12-10 05:53:21.878723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.115 [2024-12-10 05:53:21.878896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.115 [2024-12-10 05:53:21.878904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.115 [2024-12-10 05:53:21.878911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.115 [2024-12-10 05:53:21.878917] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.115 [2024-12-10 05:53:21.890509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.115 [2024-12-10 05:53:21.890905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-12-10 05:53:21.890921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.115 [2024-12-10 05:53:21.890928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.115 [2024-12-10 05:53:21.891087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.115 [2024-12-10 05:53:21.891270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.115 [2024-12-10 05:53:21.891278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.115 [2024-12-10 05:53:21.891284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.115 [2024-12-10 05:53:21.891290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.115 [2024-12-10 05:53:21.903239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.115 [2024-12-10 05:53:21.903573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-12-10 05:53:21.903589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.115 [2024-12-10 05:53:21.903596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.115 [2024-12-10 05:53:21.903754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.115 [2024-12-10 05:53:21.903912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.115 [2024-12-10 05:53:21.903920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.115 [2024-12-10 05:53:21.903926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.115 [2024-12-10 05:53:21.903931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.115 [2024-12-10 05:53:21.915969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.115 [2024-12-10 05:53:21.916413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-12-10 05:53:21.916430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.115 [2024-12-10 05:53:21.916437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.115 [2024-12-10 05:53:21.916609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.115 [2024-12-10 05:53:21.916783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.115 [2024-12-10 05:53:21.916791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.115 [2024-12-10 05:53:21.916797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.115 [2024-12-10 05:53:21.916804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.115 [2024-12-10 05:53:21.928988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.115 [2024-12-10 05:53:21.929428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-12-10 05:53:21.929445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.115 [2024-12-10 05:53:21.929456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.115 [2024-12-10 05:53:21.929630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.115 [2024-12-10 05:53:21.929801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.115 [2024-12-10 05:53:21.929809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.115 [2024-12-10 05:53:21.929815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.115 [2024-12-10 05:53:21.929821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.115 [2024-12-10 05:53:21.941889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.115 [2024-12-10 05:53:21.942314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-12-10 05:53:21.942331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.115 [2024-12-10 05:53:21.942338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.115 [2024-12-10 05:53:21.942506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.115 [2024-12-10 05:53:21.942678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.115 [2024-12-10 05:53:21.942686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.115 [2024-12-10 05:53:21.942692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.115 [2024-12-10 05:53:21.942698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.115 [2024-12-10 05:53:21.954708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.115 [2024-12-10 05:53:21.955146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-12-10 05:53:21.955162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.116 [2024-12-10 05:53:21.955175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.116 [2024-12-10 05:53:21.955342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.116 [2024-12-10 05:53:21.955509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.116 [2024-12-10 05:53:21.955517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.116 [2024-12-10 05:53:21.955523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.116 [2024-12-10 05:53:21.955529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.116 [2024-12-10 05:53:21.967463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.116 [2024-12-10 05:53:21.967892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-12-10 05:53:21.967936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.116 [2024-12-10 05:53:21.967958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.116 [2024-12-10 05:53:21.968530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.116 [2024-12-10 05:53:21.968702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.116 [2024-12-10 05:53:21.968711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.116 [2024-12-10 05:53:21.968717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.116 [2024-12-10 05:53:21.968723] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.116 [2024-12-10 05:53:21.980273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.116 [2024-12-10 05:53:21.980698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-12-10 05:53:21.980742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.116 [2024-12-10 05:53:21.980765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.116 [2024-12-10 05:53:21.981362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.116 [2024-12-10 05:53:21.981902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.116 [2024-12-10 05:53:21.981910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.116 [2024-12-10 05:53:21.981916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.116 [2024-12-10 05:53:21.981922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.116 [2024-12-10 05:53:21.993003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.116 [2024-12-10 05:53:21.993413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-12-10 05:53:21.993459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.116 [2024-12-10 05:53:21.993481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.116 [2024-12-10 05:53:21.993975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.116 [2024-12-10 05:53:21.994143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.116 [2024-12-10 05:53:21.994151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.116 [2024-12-10 05:53:21.994157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.116 [2024-12-10 05:53:21.994163] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.487 [2024-12-10 05:53:22.006435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.487 [2024-12-10 05:53:22.006877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.487 [2024-12-10 05:53:22.006894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.487 [2024-12-10 05:53:22.006901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.487 [2024-12-10 05:53:22.007075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.487 [2024-12-10 05:53:22.007275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.487 [2024-12-10 05:53:22.007284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.487 [2024-12-10 05:53:22.007295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.487 [2024-12-10 05:53:22.007301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.487 [2024-12-10 05:53:22.019399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.487 [2024-12-10 05:53:22.019835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.487 [2024-12-10 05:53:22.019852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.487 [2024-12-10 05:53:22.019859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.487 [2024-12-10 05:53:22.020031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.487 [2024-12-10 05:53:22.020210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.487 [2024-12-10 05:53:22.020219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.487 [2024-12-10 05:53:22.020226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.487 [2024-12-10 05:53:22.020232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.487 [2024-12-10 05:53:22.032399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.487 [2024-12-10 05:53:22.032834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.487 [2024-12-10 05:53:22.032878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.487 [2024-12-10 05:53:22.032901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.487 [2024-12-10 05:53:22.033494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.487 [2024-12-10 05:53:22.034062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.487 [2024-12-10 05:53:22.034069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.487 [2024-12-10 05:53:22.034076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.487 [2024-12-10 05:53:22.034083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.487 [2024-12-10 05:53:22.045229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.487 [2024-12-10 05:53:22.045642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.487 [2024-12-10 05:53:22.045658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.487 [2024-12-10 05:53:22.045665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.487 [2024-12-10 05:53:22.045823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.487 [2024-12-10 05:53:22.045982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.487 [2024-12-10 05:53:22.045989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.487 [2024-12-10 05:53:22.045995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.487 [2024-12-10 05:53:22.046001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.487 [2024-12-10 05:53:22.058052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.487 [2024-12-10 05:53:22.058495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.487 [2024-12-10 05:53:22.058512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.487 [2024-12-10 05:53:22.058519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.487 [2024-12-10 05:53:22.058691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.487 [2024-12-10 05:53:22.058863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.487 [2024-12-10 05:53:22.058871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.487 [2024-12-10 05:53:22.058877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.487 [2024-12-10 05:53:22.058883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.487 [2024-12-10 05:53:22.070812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.487 [2024-12-10 05:53:22.071227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.487 [2024-12-10 05:53:22.071243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.487 [2024-12-10 05:53:22.071250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.487 [2024-12-10 05:53:22.071414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.487 [2024-12-10 05:53:22.071573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.487 [2024-12-10 05:53:22.071580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.487 [2024-12-10 05:53:22.071586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.487 [2024-12-10 05:53:22.071592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.487 [2024-12-10 05:53:22.083652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.487 [2024-12-10 05:53:22.084089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.487 [2024-12-10 05:53:22.084106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.487 [2024-12-10 05:53:22.084112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.487 [2024-12-10 05:53:22.084285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.487 [2024-12-10 05:53:22.084453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.487 [2024-12-10 05:53:22.084460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.487 [2024-12-10 05:53:22.084466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.487 [2024-12-10 05:53:22.084472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.487 [2024-12-10 05:53:22.096394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.487 [2024-12-10 05:53:22.096820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.487 [2024-12-10 05:53:22.096865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.487 [2024-12-10 05:53:22.096895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.487 [2024-12-10 05:53:22.097493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.488 [2024-12-10 05:53:22.098083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.488 [2024-12-10 05:53:22.098091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.488 [2024-12-10 05:53:22.098098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.488 [2024-12-10 05:53:22.098104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.488 [2024-12-10 05:53:22.109191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.488 [2024-12-10 05:53:22.109606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.488 [2024-12-10 05:53:22.109622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.488 [2024-12-10 05:53:22.109629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.488 [2024-12-10 05:53:22.109797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.488 [2024-12-10 05:53:22.109963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.488 [2024-12-10 05:53:22.109971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.488 [2024-12-10 05:53:22.109977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.488 [2024-12-10 05:53:22.109983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.488 [2024-12-10 05:53:22.122007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.488 [2024-12-10 05:53:22.122441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.488 [2024-12-10 05:53:22.122458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.488 [2024-12-10 05:53:22.122465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.488 [2024-12-10 05:53:22.122632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.488 [2024-12-10 05:53:22.122799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.488 [2024-12-10 05:53:22.122807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.488 [2024-12-10 05:53:22.122813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.488 [2024-12-10 05:53:22.122818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.488 [2024-12-10 05:53:22.134808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.488 [2024-12-10 05:53:22.135234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.488 [2024-12-10 05:53:22.135250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.488 [2024-12-10 05:53:22.135256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.488 [2024-12-10 05:53:22.135415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.488 [2024-12-10 05:53:22.135577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.488 [2024-12-10 05:53:22.135584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.488 [2024-12-10 05:53:22.135590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.488 [2024-12-10 05:53:22.135596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.488 [2024-12-10 05:53:22.147572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.488 [2024-12-10 05:53:22.147902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.488 [2024-12-10 05:53:22.147918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.488 [2024-12-10 05:53:22.147924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.488 [2024-12-10 05:53:22.148082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.488 [2024-12-10 05:53:22.148264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.488 [2024-12-10 05:53:22.148272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.488 [2024-12-10 05:53:22.148278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.488 [2024-12-10 05:53:22.148284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.488 [2024-12-10 05:53:22.160530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.488 [2024-12-10 05:53:22.160953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.488 [2024-12-10 05:53:22.160998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.488 [2024-12-10 05:53:22.161021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.488 [2024-12-10 05:53:22.161620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.488 [2024-12-10 05:53:22.162213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.488 [2024-12-10 05:53:22.162239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.488 [2024-12-10 05:53:22.162259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.488 [2024-12-10 05:53:22.162278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.488 [2024-12-10 05:53:22.173270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.488 [2024-12-10 05:53:22.173708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.488 [2024-12-10 05:53:22.173726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.488 [2024-12-10 05:53:22.173733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.488 [2024-12-10 05:53:22.173901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.488 [2024-12-10 05:53:22.174069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.488 [2024-12-10 05:53:22.174077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.488 [2024-12-10 05:53:22.174087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.488 [2024-12-10 05:53:22.174093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.488 [2024-12-10 05:53:22.186299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.488 [2024-12-10 05:53:22.186701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.488 [2024-12-10 05:53:22.186718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.488 [2024-12-10 05:53:22.186725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.488 [2024-12-10 05:53:22.186897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.488 [2024-12-10 05:53:22.187070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.488 [2024-12-10 05:53:22.187078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.488 [2024-12-10 05:53:22.187084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.488 [2024-12-10 05:53:22.187090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.488 [2024-12-10 05:53:22.199070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.488 [2024-12-10 05:53:22.199400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.488 [2024-12-10 05:53:22.199418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.488 [2024-12-10 05:53:22.199425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.488 [2024-12-10 05:53:22.199592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.488 [2024-12-10 05:53:22.199759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.488 [2024-12-10 05:53:22.199767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.488 [2024-12-10 05:53:22.199773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.488 [2024-12-10 05:53:22.199779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.488 [2024-12-10 05:53:22.212183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.488 [2024-12-10 05:53:22.212614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.488 [2024-12-10 05:53:22.212630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.488 [2024-12-10 05:53:22.212637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.488 [2024-12-10 05:53:22.212809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.488 [2024-12-10 05:53:22.212981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.488 [2024-12-10 05:53:22.212989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.488 [2024-12-10 05:53:22.212995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.488 [2024-12-10 05:53:22.213002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.488 [2024-12-10 05:53:22.224967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.488 [2024-12-10 05:53:22.225355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.488 [2024-12-10 05:53:22.225372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.488 [2024-12-10 05:53:22.225378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.488 [2024-12-10 05:53:22.225536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.489 [2024-12-10 05:53:22.225694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.489 [2024-12-10 05:53:22.225701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.489 [2024-12-10 05:53:22.225707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.489 [2024-12-10 05:53:22.225712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.489 [2024-12-10 05:53:22.237740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.489 [2024-12-10 05:53:22.238100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.489 [2024-12-10 05:53:22.238143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.489 [2024-12-10 05:53:22.238179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.489 [2024-12-10 05:53:22.238765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.489 [2024-12-10 05:53:22.239358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.489 [2024-12-10 05:53:22.239384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.489 [2024-12-10 05:53:22.239404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.489 [2024-12-10 05:53:22.239423] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.489 [2024-12-10 05:53:22.250514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.489 [2024-12-10 05:53:22.250839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.489 [2024-12-10 05:53:22.250856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.489 [2024-12-10 05:53:22.250862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.489 [2024-12-10 05:53:22.251030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.489 [2024-12-10 05:53:22.251220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.489 [2024-12-10 05:53:22.251229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.489 [2024-12-10 05:53:22.251235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.489 [2024-12-10 05:53:22.251241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.489 [2024-12-10 05:53:22.263244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.489 [2024-12-10 05:53:22.263668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.489 [2024-12-10 05:53:22.263711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.489 [2024-12-10 05:53:22.263741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.489 [2024-12-10 05:53:22.264337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.489 [2024-12-10 05:53:22.264727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.489 [2024-12-10 05:53:22.264735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.489 [2024-12-10 05:53:22.264741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.489 [2024-12-10 05:53:22.264747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.489 [2024-12-10 05:53:22.276115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.489 [2024-12-10 05:53:22.276408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.489 [2024-12-10 05:53:22.276425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.489 [2024-12-10 05:53:22.276432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.489 [2024-12-10 05:53:22.276600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.489 [2024-12-10 05:53:22.276767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.489 [2024-12-10 05:53:22.276775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.489 [2024-12-10 05:53:22.276782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.489 [2024-12-10 05:53:22.276788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.489 [2024-12-10 05:53:22.288916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.489 [2024-12-10 05:53:22.289315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.489 [2024-12-10 05:53:22.289332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.489 [2024-12-10 05:53:22.289339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.489 [2024-12-10 05:53:22.289507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.489 [2024-12-10 05:53:22.289674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.489 [2024-12-10 05:53:22.289683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.489 [2024-12-10 05:53:22.289689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.489 [2024-12-10 05:53:22.289695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.489 [2024-12-10 05:53:22.301752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.489 [2024-12-10 05:53:22.302162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.489 [2024-12-10 05:53:22.302224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.489 [2024-12-10 05:53:22.302248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.489 [2024-12-10 05:53:22.302759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.489 [2024-12-10 05:53:22.302931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.489 [2024-12-10 05:53:22.302939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.489 [2024-12-10 05:53:22.302946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.489 [2024-12-10 05:53:22.302952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.489 [2024-12-10 05:53:22.314521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.489 [2024-12-10 05:53:22.314913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.489 [2024-12-10 05:53:22.314929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.489 [2024-12-10 05:53:22.314936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.489 [2024-12-10 05:53:22.315094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.489 [2024-12-10 05:53:22.315279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.489 [2024-12-10 05:53:22.315288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.489 [2024-12-10 05:53:22.315294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.489 [2024-12-10 05:53:22.315300] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.489 [2024-12-10 05:53:22.327505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.489 [2024-12-10 05:53:22.327979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.489 [2024-12-10 05:53:22.327995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.489 [2024-12-10 05:53:22.328002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.489 [2024-12-10 05:53:22.328182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.489 [2024-12-10 05:53:22.328354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.489 [2024-12-10 05:53:22.328362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.489 [2024-12-10 05:53:22.328369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.489 [2024-12-10 05:53:22.328375] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.489 [2024-12-10 05:53:22.340461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.489 [2024-12-10 05:53:22.340906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.489 [2024-12-10 05:53:22.340922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.489 [2024-12-10 05:53:22.340929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.489 [2024-12-10 05:53:22.341101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.489 [2024-12-10 05:53:22.341282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.489 [2024-12-10 05:53:22.341290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.489 [2024-12-10 05:53:22.341300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.489 [2024-12-10 05:53:22.341307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.489 [2024-12-10 05:53:22.353437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.489 [2024-12-10 05:53:22.353818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.489 [2024-12-10 05:53:22.353834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.489 [2024-12-10 05:53:22.353841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.489 [2024-12-10 05:53:22.354014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.489 [2024-12-10 05:53:22.354194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.490 [2024-12-10 05:53:22.354202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.490 [2024-12-10 05:53:22.354209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.490 [2024-12-10 05:53:22.354215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.490 [2024-12-10 05:53:22.366453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.490 [2024-12-10 05:53:22.366867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.490 [2024-12-10 05:53:22.366884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.490 [2024-12-10 05:53:22.366891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.490 [2024-12-10 05:53:22.367064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.490 [2024-12-10 05:53:22.367242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.490 [2024-12-10 05:53:22.367250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.490 [2024-12-10 05:53:22.367257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.490 [2024-12-10 05:53:22.367263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.795 [2024-12-10 05:53:22.379473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.795 [2024-12-10 05:53:22.379853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.795 [2024-12-10 05:53:22.379869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.795 [2024-12-10 05:53:22.379876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.795 [2024-12-10 05:53:22.380048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.795 [2024-12-10 05:53:22.380227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.795 [2024-12-10 05:53:22.380235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.795 [2024-12-10 05:53:22.380241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.795 [2024-12-10 05:53:22.380248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.795 [2024-12-10 05:53:22.392436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.795 [2024-12-10 05:53:22.392857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.795 [2024-12-10 05:53:22.392874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.795 [2024-12-10 05:53:22.392881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.795 [2024-12-10 05:53:22.393054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.795 [2024-12-10 05:53:22.393232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.795 [2024-12-10 05:53:22.393241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.795 [2024-12-10 05:53:22.393248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.795 [2024-12-10 05:53:22.393254] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.795 [2024-12-10 05:53:22.405208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.795 [2024-12-10 05:53:22.405595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.795 [2024-12-10 05:53:22.405610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.795 [2024-12-10 05:53:22.405617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.795 [2024-12-10 05:53:22.405776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.795 [2024-12-10 05:53:22.405934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.795 [2024-12-10 05:53:22.405941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.795 [2024-12-10 05:53:22.405947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.795 [2024-12-10 05:53:22.405953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.795 [2024-12-10 05:53:22.418080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.795 [2024-12-10 05:53:22.418497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.795 [2024-12-10 05:53:22.418514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:34.795 [2024-12-10 05:53:22.418521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:34.795 [2024-12-10 05:53:22.418689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:34.795 [2024-12-10 05:53:22.418856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.795 [2024-12-10 05:53:22.418864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.795 [2024-12-10 05:53:22.418870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.795 [2024-12-10 05:53:22.418876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.795 [2024-12-10 05:53:22.431017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.795 [2024-12-10 05:53:22.431410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.795 [2024-12-10 05:53:22.431427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.795 [2024-12-10 05:53:22.431437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.795 [2024-12-10 05:53:22.431610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.795 [2024-12-10 05:53:22.431783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.795 [2024-12-10 05:53:22.431791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.795 [2024-12-10 05:53:22.431798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.795 [2024-12-10 05:53:22.431804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.795 [2024-12-10 05:53:22.444015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.795 [2024-12-10 05:53:22.444444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.795 [2024-12-10 05:53:22.444461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.795 [2024-12-10 05:53:22.444468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.795 [2024-12-10 05:53:22.444641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.795 [2024-12-10 05:53:22.444813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.795 [2024-12-10 05:53:22.444821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.795 [2024-12-10 05:53:22.444827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.795 [2024-12-10 05:53:22.444834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.795 [2024-12-10 05:53:22.456921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.795 [2024-12-10 05:53:22.457342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.795 [2024-12-10 05:53:22.457359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.795 [2024-12-10 05:53:22.457366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.795 [2024-12-10 05:53:22.457534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.795 [2024-12-10 05:53:22.457701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.795 [2024-12-10 05:53:22.457709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.795 [2024-12-10 05:53:22.457715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.795 [2024-12-10 05:53:22.457721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.795 [2024-12-10 05:53:22.469787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.795 [2024-12-10 05:53:22.470182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.795 [2024-12-10 05:53:22.470198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.795 [2024-12-10 05:53:22.470205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.795 [2024-12-10 05:53:22.470364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.795 [2024-12-10 05:53:22.470526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.795 [2024-12-10 05:53:22.470534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.795 [2024-12-10 05:53:22.470540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.795 [2024-12-10 05:53:22.470545] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.795 [2024-12-10 05:53:22.482578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.796 [2024-12-10 05:53:22.482894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.796 [2024-12-10 05:53:22.482911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.796 [2024-12-10 05:53:22.482917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.796 [2024-12-10 05:53:22.483075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.796 [2024-12-10 05:53:22.483264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.796 [2024-12-10 05:53:22.483273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.796 [2024-12-10 05:53:22.483279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.796 [2024-12-10 05:53:22.483285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.796 [2024-12-10 05:53:22.495399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.796 [2024-12-10 05:53:22.495814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.796 [2024-12-10 05:53:22.495830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.796 [2024-12-10 05:53:22.495837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.796 [2024-12-10 05:53:22.496005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.796 [2024-12-10 05:53:22.496178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.796 [2024-12-10 05:53:22.496187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.796 [2024-12-10 05:53:22.496193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.796 [2024-12-10 05:53:22.496199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.796 [2024-12-10 05:53:22.508204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.796 [2024-12-10 05:53:22.508594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.796 [2024-12-10 05:53:22.508610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.796 [2024-12-10 05:53:22.508617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.796 [2024-12-10 05:53:22.508775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.796 [2024-12-10 05:53:22.508934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.796 [2024-12-10 05:53:22.508942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.796 [2024-12-10 05:53:22.508951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.796 [2024-12-10 05:53:22.508957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.796 [2024-12-10 05:53:22.521040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.796 [2024-12-10 05:53:22.521464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.796 [2024-12-10 05:53:22.521480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.796 [2024-12-10 05:53:22.521487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.796 [2024-12-10 05:53:22.521655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.796 [2024-12-10 05:53:22.521821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.796 [2024-12-10 05:53:22.521832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.796 [2024-12-10 05:53:22.521840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.796 [2024-12-10 05:53:22.521847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.796 [2024-12-10 05:53:22.533918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.796 [2024-12-10 05:53:22.534360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.796 [2024-12-10 05:53:22.534406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.796 [2024-12-10 05:53:22.534428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.796 [2024-12-10 05:53:22.534906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.796 [2024-12-10 05:53:22.535079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.796 [2024-12-10 05:53:22.535089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.796 [2024-12-10 05:53:22.535095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.796 [2024-12-10 05:53:22.535101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.796 [2024-12-10 05:53:22.546881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.796 [2024-12-10 05:53:22.547251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.796 [2024-12-10 05:53:22.547270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.796 [2024-12-10 05:53:22.547279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.796 [2024-12-10 05:53:22.547461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.796 [2024-12-10 05:53:22.547629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.796 [2024-12-10 05:53:22.547638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.796 [2024-12-10 05:53:22.547644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.796 [2024-12-10 05:53:22.547649] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.796 [2024-12-10 05:53:22.559822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.796 [2024-12-10 05:53:22.560185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.796 [2024-12-10 05:53:22.560202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.796 [2024-12-10 05:53:22.560209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.796 [2024-12-10 05:53:22.560377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.796 [2024-12-10 05:53:22.560545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.796 [2024-12-10 05:53:22.560553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.796 [2024-12-10 05:53:22.560559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.796 [2024-12-10 05:53:22.560564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.796 [2024-12-10 05:53:22.572691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.796 [2024-12-10 05:53:22.573032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.796 [2024-12-10 05:53:22.573048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.796 [2024-12-10 05:53:22.573055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.796 [2024-12-10 05:53:22.573229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.796 [2024-12-10 05:53:22.573397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.796 [2024-12-10 05:53:22.573405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.796 [2024-12-10 05:53:22.573411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.796 [2024-12-10 05:53:22.573417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.796 7348.00 IOPS, 28.70 MiB/s [2024-12-10T04:53:22.692Z]
00:28:34.796 [2024-12-10 05:53:22.586759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.796 [2024-12-10 05:53:22.587195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.796 [2024-12-10 05:53:22.587212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.796 [2024-12-10 05:53:22.587219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.796 [2024-12-10 05:53:22.587394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.796 [2024-12-10 05:53:22.587556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.796 [2024-12-10 05:53:22.587564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.796 [2024-12-10 05:53:22.587570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.796 [2024-12-10 05:53:22.587576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.796 [2024-12-10 05:53:22.599630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.796 [2024-12-10 05:53:22.600064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.796 [2024-12-10 05:53:22.600084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.796 [2024-12-10 05:53:22.600091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.796 [2024-12-10 05:53:22.600263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.796 [2024-12-10 05:53:22.600432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.796 [2024-12-10 05:53:22.600440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.796 [2024-12-10 05:53:22.600446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.796 [2024-12-10 05:53:22.600452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.796 [2024-12-10 05:53:22.612536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.796 [2024-12-10 05:53:22.612944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.797 [2024-12-10 05:53:22.612960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.797 [2024-12-10 05:53:22.612968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.797 [2024-12-10 05:53:22.613135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.797 [2024-12-10 05:53:22.613307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.797 [2024-12-10 05:53:22.613316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.797 [2024-12-10 05:53:22.613322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.797 [2024-12-10 05:53:22.613328] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.797 [2024-12-10 05:53:22.625339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.797 [2024-12-10 05:53:22.625664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.797 [2024-12-10 05:53:22.625680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.797 [2024-12-10 05:53:22.625687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.797 [2024-12-10 05:53:22.625854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.797 [2024-12-10 05:53:22.626021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.797 [2024-12-10 05:53:22.626029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.797 [2024-12-10 05:53:22.626035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.797 [2024-12-10 05:53:22.626041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.797 [2024-12-10 05:53:22.638252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.797 [2024-12-10 05:53:22.638532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.797 [2024-12-10 05:53:22.638548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.797 [2024-12-10 05:53:22.638555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.797 [2024-12-10 05:53:22.638726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.797 [2024-12-10 05:53:22.638896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.797 [2024-12-10 05:53:22.638904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.797 [2024-12-10 05:53:22.638910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.797 [2024-12-10 05:53:22.638916] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.797 [2024-12-10 05:53:22.651216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.797 [2024-12-10 05:53:22.651576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.797 [2024-12-10 05:53:22.651592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.797 [2024-12-10 05:53:22.651599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.797 [2024-12-10 05:53:22.651771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.797 [2024-12-10 05:53:22.651947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.797 [2024-12-10 05:53:22.651956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.797 [2024-12-10 05:53:22.651962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.797 [2024-12-10 05:53:22.651969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.797 [2024-12-10 05:53:22.664314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.797 [2024-12-10 05:53:22.664665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.797 [2024-12-10 05:53:22.664681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.797 [2024-12-10 05:53:22.664688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.797 [2024-12-10 05:53:22.664861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.797 [2024-12-10 05:53:22.665034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.797 [2024-12-10 05:53:22.665042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.797 [2024-12-10 05:53:22.665048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.797 [2024-12-10 05:53:22.665054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.797 [2024-12-10 05:53:22.677347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.797 [2024-12-10 05:53:22.677630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.797 [2024-12-10 05:53:22.677647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:34.797 [2024-12-10 05:53:22.677653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:34.797 [2024-12-10 05:53:22.677821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:34.797 [2024-12-10 05:53:22.677989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.797 [2024-12-10 05:53:22.677996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.797 [2024-12-10 05:53:22.678006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.797 [2024-12-10 05:53:22.678012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.057 [2024-12-10 05:53:22.690326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.057 [2024-12-10 05:53:22.690682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.057 [2024-12-10 05:53:22.690698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:35.057 [2024-12-10 05:53:22.690705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:35.057 [2024-12-10 05:53:22.690878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:35.057 [2024-12-10 05:53:22.691050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.057 [2024-12-10 05:53:22.691058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.057 [2024-12-10 05:53:22.691065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.057 [2024-12-10 05:53:22.691071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.057 [2024-12-10 05:53:22.703293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.057 [2024-12-10 05:53:22.703578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.057 [2024-12-10 05:53:22.703594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:35.057 [2024-12-10 05:53:22.703602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:35.057 [2024-12-10 05:53:22.703774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:35.057 [2024-12-10 05:53:22.703951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.057 [2024-12-10 05:53:22.703959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.057 [2024-12-10 05:53:22.703966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.057 [2024-12-10 05:53:22.703972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.057 [2024-12-10 05:53:22.716242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.057 [2024-12-10 05:53:22.716523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.057 [2024-12-10 05:53:22.716539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:35.057 [2024-12-10 05:53:22.716546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:35.057 [2024-12-10 05:53:22.716731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:35.057 [2024-12-10 05:53:22.716904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.057 [2024-12-10 05:53:22.716912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.057 [2024-12-10 05:53:22.716919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.057 [2024-12-10 05:53:22.716925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.057 [2024-12-10 05:53:22.729322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.057 [2024-12-10 05:53:22.729757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.057 [2024-12-10 05:53:22.729800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:35.057 [2024-12-10 05:53:22.729823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:35.057 [2024-12-10 05:53:22.730422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:35.057 [2024-12-10 05:53:22.730591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.057 [2024-12-10 05:53:22.730599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.057 [2024-12-10 05:53:22.730605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.057 [2024-12-10 05:53:22.730611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.057 [2024-12-10 05:53:22.742163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.057 [2024-12-10 05:53:22.742562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.057 [2024-12-10 05:53:22.742578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:35.057 [2024-12-10 05:53:22.742586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:35.057 [2024-12-10 05:53:22.742754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:35.057 [2024-12-10 05:53:22.742921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.057 [2024-12-10 05:53:22.742929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.057 [2024-12-10 05:53:22.742935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.057 [2024-12-10 05:53:22.742941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.057 [2024-12-10 05:53:22.755155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.057 [2024-12-10 05:53:22.755562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.057 [2024-12-10 05:53:22.755578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:35.057 [2024-12-10 05:53:22.755585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:35.057 [2024-12-10 05:53:22.755752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:35.057 [2024-12-10 05:53:22.755924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.057 [2024-12-10 05:53:22.755932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.057 [2024-12-10 05:53:22.755938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.057 [2024-12-10 05:53:22.755944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.057 [2024-12-10 05:53:22.768083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.057 [2024-12-10 05:53:22.768465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.057 [2024-12-10 05:53:22.768517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:35.057 [2024-12-10 05:53:22.768540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:35.057 [2024-12-10 05:53:22.768996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:35.057 [2024-12-10 05:53:22.769164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.057 [2024-12-10 05:53:22.769178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.057 [2024-12-10 05:53:22.769184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.057 [2024-12-10 05:53:22.769190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.058 [2024-12-10 05:53:22.780960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.058 [2024-12-10 05:53:22.781289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.058 [2024-12-10 05:53:22.781305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:35.058 [2024-12-10 05:53:22.781312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:35.058 [2024-12-10 05:53:22.781479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:35.058 [2024-12-10 05:53:22.781646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.058 [2024-12-10 05:53:22.781653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.058 [2024-12-10 05:53:22.781660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.058 [2024-12-10 05:53:22.781665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.058 [2024-12-10 05:53:22.793818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.058 [2024-12-10 05:53:22.794254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-12-10 05:53:22.794271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.058 [2024-12-10 05:53:22.794278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.058 [2024-12-10 05:53:22.794446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.058 [2024-12-10 05:53:22.794613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.058 [2024-12-10 05:53:22.794621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.058 [2024-12-10 05:53:22.794627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.058 [2024-12-10 05:53:22.794633] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.058 [2024-12-10 05:53:22.806637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.058 [2024-12-10 05:53:22.807046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-12-10 05:53:22.807063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.058 [2024-12-10 05:53:22.807069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.058 [2024-12-10 05:53:22.807254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.058 [2024-12-10 05:53:22.807422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.058 [2024-12-10 05:53:22.807430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.058 [2024-12-10 05:53:22.807436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.058 [2024-12-10 05:53:22.807442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.058 [2024-12-10 05:53:22.819622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.058 [2024-12-10 05:53:22.819996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-12-10 05:53:22.820040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.058 [2024-12-10 05:53:22.820063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.058 [2024-12-10 05:53:22.820627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.058 [2024-12-10 05:53:22.820797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.058 [2024-12-10 05:53:22.820805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.058 [2024-12-10 05:53:22.820811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.058 [2024-12-10 05:53:22.820817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.058 [2024-12-10 05:53:22.832512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.058 [2024-12-10 05:53:22.832884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-12-10 05:53:22.832900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.058 [2024-12-10 05:53:22.832907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.058 [2024-12-10 05:53:22.833074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.058 [2024-12-10 05:53:22.833246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.058 [2024-12-10 05:53:22.833254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.058 [2024-12-10 05:53:22.833260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.058 [2024-12-10 05:53:22.833266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.058 [2024-12-10 05:53:22.845309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.058 [2024-12-10 05:53:22.845585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-12-10 05:53:22.845602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.058 [2024-12-10 05:53:22.845609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.058 [2024-12-10 05:53:22.845776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.058 [2024-12-10 05:53:22.845943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.058 [2024-12-10 05:53:22.845951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.058 [2024-12-10 05:53:22.845963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.058 [2024-12-10 05:53:22.845969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.058 [2024-12-10 05:53:22.858209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.058 [2024-12-10 05:53:22.858657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-12-10 05:53:22.858701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.058 [2024-12-10 05:53:22.858724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.058 [2024-12-10 05:53:22.859320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.058 [2024-12-10 05:53:22.859899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.058 [2024-12-10 05:53:22.859908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.058 [2024-12-10 05:53:22.859914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.058 [2024-12-10 05:53:22.859920] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.058 [2024-12-10 05:53:22.871292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.058 [2024-12-10 05:53:22.871576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-12-10 05:53:22.871593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.058 [2024-12-10 05:53:22.871600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.058 [2024-12-10 05:53:22.871768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.058 [2024-12-10 05:53:22.871936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.058 [2024-12-10 05:53:22.871944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.058 [2024-12-10 05:53:22.871950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.058 [2024-12-10 05:53:22.871956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.058 [2024-12-10 05:53:22.884309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.058 [2024-12-10 05:53:22.884602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-12-10 05:53:22.884618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.058 [2024-12-10 05:53:22.884626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.058 [2024-12-10 05:53:22.884798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.058 [2024-12-10 05:53:22.884970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.058 [2024-12-10 05:53:22.884979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.058 [2024-12-10 05:53:22.884986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.058 [2024-12-10 05:53:22.884991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.058 [2024-12-10 05:53:22.897257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.058 [2024-12-10 05:53:22.897615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-12-10 05:53:22.897632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.058 [2024-12-10 05:53:22.897639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.058 [2024-12-10 05:53:22.897806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.058 [2024-12-10 05:53:22.897974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.058 [2024-12-10 05:53:22.897982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.058 [2024-12-10 05:53:22.897988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.058 [2024-12-10 05:53:22.897994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.058 [2024-12-10 05:53:22.910185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.058 [2024-12-10 05:53:22.910531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-12-10 05:53:22.910547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.059 [2024-12-10 05:53:22.910554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.059 [2024-12-10 05:53:22.910722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.059 [2024-12-10 05:53:22.910889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.059 [2024-12-10 05:53:22.910897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.059 [2024-12-10 05:53:22.910903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.059 [2024-12-10 05:53:22.910908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.059 [2024-12-10 05:53:22.923134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.059 [2024-12-10 05:53:22.923595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-12-10 05:53:22.923612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.059 [2024-12-10 05:53:22.923619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.059 [2024-12-10 05:53:22.923786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.059 [2024-12-10 05:53:22.923953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.059 [2024-12-10 05:53:22.923961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.059 [2024-12-10 05:53:22.923967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.059 [2024-12-10 05:53:22.923973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.059 [2024-12-10 05:53:22.935940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.059 [2024-12-10 05:53:22.936325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-12-10 05:53:22.936345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.059 [2024-12-10 05:53:22.936352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.059 [2024-12-10 05:53:22.936511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.059 [2024-12-10 05:53:22.936669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.059 [2024-12-10 05:53:22.936676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.059 [2024-12-10 05:53:22.936682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.059 [2024-12-10 05:53:22.936688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.318 [2024-12-10 05:53:22.948827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.318 [2024-12-10 05:53:22.949252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.318 [2024-12-10 05:53:22.949270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.318 [2024-12-10 05:53:22.949277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.318 [2024-12-10 05:53:22.949450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.318 [2024-12-10 05:53:22.949622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.318 [2024-12-10 05:53:22.949631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.318 [2024-12-10 05:53:22.949638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.319 [2024-12-10 05:53:22.949644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.319 [2024-12-10 05:53:22.961845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.319 [2024-12-10 05:53:22.962248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.319 [2024-12-10 05:53:22.962265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.319 [2024-12-10 05:53:22.962273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.319 [2024-12-10 05:53:22.962446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.319 [2024-12-10 05:53:22.962618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.319 [2024-12-10 05:53:22.962626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.319 [2024-12-10 05:53:22.962633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.319 [2024-12-10 05:53:22.962639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.319 [2024-12-10 05:53:22.974751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.319 [2024-12-10 05:53:22.975141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.319 [2024-12-10 05:53:22.975158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.319 [2024-12-10 05:53:22.975171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.319 [2024-12-10 05:53:22.975363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.319 [2024-12-10 05:53:22.975536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.319 [2024-12-10 05:53:22.975544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.319 [2024-12-10 05:53:22.975550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.319 [2024-12-10 05:53:22.975556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.319 [2024-12-10 05:53:22.987510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.319 [2024-12-10 05:53:22.987827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.319 [2024-12-10 05:53:22.987843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.319 [2024-12-10 05:53:22.987850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.319 [2024-12-10 05:53:22.988008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.319 [2024-12-10 05:53:22.988172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.319 [2024-12-10 05:53:22.988180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.319 [2024-12-10 05:53:22.988185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.319 [2024-12-10 05:53:22.988191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.319 [2024-12-10 05:53:23.000361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.319 [2024-12-10 05:53:23.000764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.319 [2024-12-10 05:53:23.000809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.319 [2024-12-10 05:53:23.000832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.319 [2024-12-10 05:53:23.001365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.319 [2024-12-10 05:53:23.001535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.319 [2024-12-10 05:53:23.001543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.319 [2024-12-10 05:53:23.001549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.319 [2024-12-10 05:53:23.001555] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.319 [2024-12-10 05:53:23.013228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.319 [2024-12-10 05:53:23.013648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.319 [2024-12-10 05:53:23.013664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.319 [2024-12-10 05:53:23.013671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.319 [2024-12-10 05:53:23.013837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.319 [2024-12-10 05:53:23.014005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.319 [2024-12-10 05:53:23.014013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.319 [2024-12-10 05:53:23.014022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.319 [2024-12-10 05:53:23.014029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.319 [2024-12-10 05:53:23.026057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.319 [2024-12-10 05:53:23.026407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.319 [2024-12-10 05:53:23.026453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.319 [2024-12-10 05:53:23.026477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.319 [2024-12-10 05:53:23.027056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.319 [2024-12-10 05:53:23.027230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.319 [2024-12-10 05:53:23.027238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.319 [2024-12-10 05:53:23.027244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.319 [2024-12-10 05:53:23.027251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.319 [2024-12-10 05:53:23.038913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.319 [2024-12-10 05:53:23.039327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.319 [2024-12-10 05:53:23.039344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.319 [2024-12-10 05:53:23.039351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.319 [2024-12-10 05:53:23.039518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.319 [2024-12-10 05:53:23.039686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.319 [2024-12-10 05:53:23.039693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.319 [2024-12-10 05:53:23.039700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.319 [2024-12-10 05:53:23.039705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.319 [2024-12-10 05:53:23.051734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.319 [2024-12-10 05:53:23.052131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.319 [2024-12-10 05:53:23.052187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.319 [2024-12-10 05:53:23.052210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.319 [2024-12-10 05:53:23.052792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.319 [2024-12-10 05:53:23.053386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.319 [2024-12-10 05:53:23.053412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.319 [2024-12-10 05:53:23.053432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.319 [2024-12-10 05:53:23.053451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.319 [2024-12-10 05:53:23.064532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.319 [2024-12-10 05:53:23.064942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.319 [2024-12-10 05:53:23.064958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.319 [2024-12-10 05:53:23.064965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.319 [2024-12-10 05:53:23.065132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.319 [2024-12-10 05:53:23.065305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.319 [2024-12-10 05:53:23.065314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.319 [2024-12-10 05:53:23.065320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.319 [2024-12-10 05:53:23.065326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.319 [2024-12-10 05:53:23.077388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.319 [2024-12-10 05:53:23.077809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.320 [2024-12-10 05:53:23.077854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.320 [2024-12-10 05:53:23.077877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.320 [2024-12-10 05:53:23.078314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.320 [2024-12-10 05:53:23.078482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.320 [2024-12-10 05:53:23.078490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.320 [2024-12-10 05:53:23.078496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.320 [2024-12-10 05:53:23.078502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.320 [2024-12-10 05:53:23.090175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.320 [2024-12-10 05:53:23.090567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.320 [2024-12-10 05:53:23.090611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.320 [2024-12-10 05:53:23.090634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.320 [2024-12-10 05:53:23.091231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.320 [2024-12-10 05:53:23.091717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.320 [2024-12-10 05:53:23.091725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.320 [2024-12-10 05:53:23.091731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.320 [2024-12-10 05:53:23.091737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.320 [2024-12-10 05:53:23.102975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.320 [2024-12-10 05:53:23.103315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.320 [2024-12-10 05:53:23.103368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.320 [2024-12-10 05:53:23.103391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.320 [2024-12-10 05:53:23.103973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.320 [2024-12-10 05:53:23.104572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.320 [2024-12-10 05:53:23.104599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.320 [2024-12-10 05:53:23.104622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.320 [2024-12-10 05:53:23.104628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.320 [2024-12-10 05:53:23.115701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.320 [2024-12-10 05:53:23.116115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.320 [2024-12-10 05:53:23.116131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.320 [2024-12-10 05:53:23.116138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.320 [2024-12-10 05:53:23.116311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.320 [2024-12-10 05:53:23.116480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.320 [2024-12-10 05:53:23.116487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.320 [2024-12-10 05:53:23.116494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.320 [2024-12-10 05:53:23.116499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.320 [2024-12-10 05:53:23.128462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.320 [2024-12-10 05:53:23.128855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.320 [2024-12-10 05:53:23.128870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.320 [2024-12-10 05:53:23.128877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.320 [2024-12-10 05:53:23.129036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.320 [2024-12-10 05:53:23.129216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.320 [2024-12-10 05:53:23.129225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.320 [2024-12-10 05:53:23.129232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.320 [2024-12-10 05:53:23.129237] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.320 [2024-12-10 05:53:23.141202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.320 [2024-12-10 05:53:23.141627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.320 [2024-12-10 05:53:23.141670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.320 [2024-12-10 05:53:23.141693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.320 [2024-12-10 05:53:23.142291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.320 [2024-12-10 05:53:23.142711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.320 [2024-12-10 05:53:23.142719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.320 [2024-12-10 05:53:23.142726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.320 [2024-12-10 05:53:23.142732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.320 [2024-12-10 05:53:23.153938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.320 [2024-12-10 05:53:23.154351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.320 [2024-12-10 05:53:23.154367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.320 [2024-12-10 05:53:23.154375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.320 [2024-12-10 05:53:23.154542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.320 [2024-12-10 05:53:23.154709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.320 [2024-12-10 05:53:23.154717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.320 [2024-12-10 05:53:23.154723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.320 [2024-12-10 05:53:23.154729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.320 [2024-12-10 05:53:23.166797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.320 [2024-12-10 05:53:23.167197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.320 [2024-12-10 05:53:23.167241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.320 [2024-12-10 05:53:23.167264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.320 [2024-12-10 05:53:23.167540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.320 [2024-12-10 05:53:23.167708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.320 [2024-12-10 05:53:23.167716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.320 [2024-12-10 05:53:23.167722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.320 [2024-12-10 05:53:23.167727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.320 [2024-12-10 05:53:23.179622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.320 [2024-12-10 05:53:23.180011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.320 [2024-12-10 05:53:23.180027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.320 [2024-12-10 05:53:23.180033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.320 [2024-12-10 05:53:23.180214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.320 [2024-12-10 05:53:23.180382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.320 [2024-12-10 05:53:23.180390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.320 [2024-12-10 05:53:23.180399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.320 [2024-12-10 05:53:23.180405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.320 [2024-12-10 05:53:23.192514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.320 [2024-12-10 05:53:23.192917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.320 [2024-12-10 05:53:23.192933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.320 [2024-12-10 05:53:23.192940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.320 [2024-12-10 05:53:23.193108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.320 [2024-12-10 05:53:23.193302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.320 [2024-12-10 05:53:23.193312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.320 [2024-12-10 05:53:23.193318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.320 [2024-12-10 05:53:23.193324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.320 [2024-12-10 05:53:23.205397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.320 [2024-12-10 05:53:23.205741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.320 [2024-12-10 05:53:23.205758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.321 [2024-12-10 05:53:23.205766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.321 [2024-12-10 05:53:23.205938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.321 [2024-12-10 05:53:23.206111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.321 [2024-12-10 05:53:23.206119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.321 [2024-12-10 05:53:23.206126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.321 [2024-12-10 05:53:23.206132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.581 [2024-12-10 05:53:23.218510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.581 [2024-12-10 05:53:23.218911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.581 [2024-12-10 05:53:23.218928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.581 [2024-12-10 05:53:23.218935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.581 [2024-12-10 05:53:23.219107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.581 [2024-12-10 05:53:23.219285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.581 [2024-12-10 05:53:23.219294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.581 [2024-12-10 05:53:23.219300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.581 [2024-12-10 05:53:23.219306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.581 [2024-12-10 05:53:23.231426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.581 [2024-12-10 05:53:23.231841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.581 [2024-12-10 05:53:23.231857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.581 [2024-12-10 05:53:23.231864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.581 [2024-12-10 05:53:23.232031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.581 [2024-12-10 05:53:23.232208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.581 [2024-12-10 05:53:23.232216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.581 [2024-12-10 05:53:23.232222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.581 [2024-12-10 05:53:23.232229] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.581 [2024-12-10 05:53:23.244269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.581 [2024-12-10 05:53:23.244654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.581 [2024-12-10 05:53:23.244670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.581 [2024-12-10 05:53:23.244676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.581 [2024-12-10 05:53:23.244835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.581 [2024-12-10 05:53:23.244993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.581 [2024-12-10 05:53:23.245001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.581 [2024-12-10 05:53:23.245007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.581 [2024-12-10 05:53:23.245013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.581 [2024-12-10 05:53:23.257172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.581 [2024-12-10 05:53:23.257581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.581 [2024-12-10 05:53:23.257625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.581 [2024-12-10 05:53:23.257647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.581 [2024-12-10 05:53:23.258153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.581 [2024-12-10 05:53:23.258345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.581 [2024-12-10 05:53:23.258354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.581 [2024-12-10 05:53:23.258360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.581 [2024-12-10 05:53:23.258366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.581 [2024-12-10 05:53:23.270091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.581 [2024-12-10 05:53:23.270509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.581 [2024-12-10 05:53:23.270554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.581 [2024-12-10 05:53:23.270584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.581 [2024-12-10 05:53:23.271038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.581 [2024-12-10 05:53:23.271212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.581 [2024-12-10 05:53:23.271220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.581 [2024-12-10 05:53:23.271226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.581 [2024-12-10 05:53:23.271233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.581 [2024-12-10 05:53:23.283016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.581 [2024-12-10 05:53:23.283416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.581 [2024-12-10 05:53:23.283432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.581 [2024-12-10 05:53:23.283439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.581 [2024-12-10 05:53:23.283606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.581 [2024-12-10 05:53:23.283773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.581 [2024-12-10 05:53:23.283781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.581 [2024-12-10 05:53:23.283787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.581 [2024-12-10 05:53:23.283793] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.581 [2024-12-10 05:53:23.295816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.581 [2024-12-10 05:53:23.296225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.581 [2024-12-10 05:53:23.296242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.581 [2024-12-10 05:53:23.296249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.581 [2024-12-10 05:53:23.296424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.581 [2024-12-10 05:53:23.296583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.581 [2024-12-10 05:53:23.296591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.581 [2024-12-10 05:53:23.296597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.581 [2024-12-10 05:53:23.296602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.581 [2024-12-10 05:53:23.308578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.581 [2024-12-10 05:53:23.308972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.581 [2024-12-10 05:53:23.308988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.581 [2024-12-10 05:53:23.308995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.581 [2024-12-10 05:53:23.309154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.581 [2024-12-10 05:53:23.309344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.581 [2024-12-10 05:53:23.309353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.581 [2024-12-10 05:53:23.309359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.581 [2024-12-10 05:53:23.309365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.581 [2024-12-10 05:53:23.321377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.581 [2024-12-10 05:53:23.321785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.581 [2024-12-10 05:53:23.321802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.581 [2024-12-10 05:53:23.321808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.581 [2024-12-10 05:53:23.321966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.581 [2024-12-10 05:53:23.322125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.581 [2024-12-10 05:53:23.322132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.581 [2024-12-10 05:53:23.322138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.581 [2024-12-10 05:53:23.322144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.581 [2024-12-10 05:53:23.334235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.581 [2024-12-10 05:53:23.334576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.581 [2024-12-10 05:53:23.334593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.581 [2024-12-10 05:53:23.334601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.581 [2024-12-10 05:53:23.334768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.581 [2024-12-10 05:53:23.334935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.582 [2024-12-10 05:53:23.334944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.582 [2024-12-10 05:53:23.334950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.582 [2024-12-10 05:53:23.334956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.582 [2024-12-10 05:53:23.347112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.582 [2024-12-10 05:53:23.347448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.582 [2024-12-10 05:53:23.347464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.582 [2024-12-10 05:53:23.347471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.582 [2024-12-10 05:53:23.347638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.582 [2024-12-10 05:53:23.347805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.582 [2024-12-10 05:53:23.347813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.582 [2024-12-10 05:53:23.347823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.582 [2024-12-10 05:53:23.347829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.582 [2024-12-10 05:53:23.360086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.582 [2024-12-10 05:53:23.360427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.582 [2024-12-10 05:53:23.360444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.582 [2024-12-10 05:53:23.360451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.582 [2024-12-10 05:53:23.360623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.582 [2024-12-10 05:53:23.360796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.582 [2024-12-10 05:53:23.360804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.582 [2024-12-10 05:53:23.360811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.582 [2024-12-10 05:53:23.360817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.582 [2024-12-10 05:53:23.373043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.582 [2024-12-10 05:53:23.373397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.582 [2024-12-10 05:53:23.373414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.582 [2024-12-10 05:53:23.373421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.582 [2024-12-10 05:53:23.373588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.582 [2024-12-10 05:53:23.373755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.582 [2024-12-10 05:53:23.373763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.582 [2024-12-10 05:53:23.373770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.582 [2024-12-10 05:53:23.373775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.582 [2024-12-10 05:53:23.386025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.582 [2024-12-10 05:53:23.386453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.582 [2024-12-10 05:53:23.386469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.582 [2024-12-10 05:53:23.386476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.582 [2024-12-10 05:53:23.386644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.582 [2024-12-10 05:53:23.386811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.582 [2024-12-10 05:53:23.386819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.582 [2024-12-10 05:53:23.386826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.582 [2024-12-10 05:53:23.386832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.582 [2024-12-10 05:53:23.398981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.582 [2024-12-10 05:53:23.399419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.582 [2024-12-10 05:53:23.399437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.582 [2024-12-10 05:53:23.399444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.582 [2024-12-10 05:53:23.399628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.582 [2024-12-10 05:53:23.399800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.582 [2024-12-10 05:53:23.399808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.582 [2024-12-10 05:53:23.399814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.582 [2024-12-10 05:53:23.399820] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.582 [2024-12-10 05:53:23.412020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.582 [2024-12-10 05:53:23.412464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.582 [2024-12-10 05:53:23.412510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.582 [2024-12-10 05:53:23.412533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.582 [2024-12-10 05:53:23.413113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.582 [2024-12-10 05:53:23.413294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.582 [2024-12-10 05:53:23.413303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.582 [2024-12-10 05:53:23.413309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.582 [2024-12-10 05:53:23.413315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.582 [2024-12-10 05:53:23.425001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.582 [2024-12-10 05:53:23.425433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.582 [2024-12-10 05:53:23.425449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.582 [2024-12-10 05:53:23.425456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.582 [2024-12-10 05:53:23.425623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.582 [2024-12-10 05:53:23.425790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.582 [2024-12-10 05:53:23.425798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.582 [2024-12-10 05:53:23.425804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.582 [2024-12-10 05:53:23.425810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.582 [2024-12-10 05:53:23.437816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.582 [2024-12-10 05:53:23.438254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.582 [2024-12-10 05:53:23.438271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.582 [2024-12-10 05:53:23.438281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.582 [2024-12-10 05:53:23.438449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.582 [2024-12-10 05:53:23.438616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.582 [2024-12-10 05:53:23.438624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.582 [2024-12-10 05:53:23.438630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.582 [2024-12-10 05:53:23.438636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.582 [2024-12-10 05:53:23.450585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.582 [2024-12-10 05:53:23.450998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.582 [2024-12-10 05:53:23.451041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.582 [2024-12-10 05:53:23.451063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.582 [2024-12-10 05:53:23.451538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.582 [2024-12-10 05:53:23.451706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.582 [2024-12-10 05:53:23.451714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.582 [2024-12-10 05:53:23.451720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.582 [2024-12-10 05:53:23.451726] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.582 [2024-12-10 05:53:23.463371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.582 [2024-12-10 05:53:23.463802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.582 [2024-12-10 05:53:23.463819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.582 [2024-12-10 05:53:23.463826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.582 [2024-12-10 05:53:23.463994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.582 [2024-12-10 05:53:23.464170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.582 [2024-12-10 05:53:23.464179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.582 [2024-12-10 05:53:23.464185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.583 [2024-12-10 05:53:23.464207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.842 [2024-12-10 05:53:23.476400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.842 [2024-12-10 05:53:23.476827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.842 [2024-12-10 05:53:23.476844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.842 [2024-12-10 05:53:23.476851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.842 [2024-12-10 05:53:23.477023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.842 [2024-12-10 05:53:23.477207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.842 [2024-12-10 05:53:23.477216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.842 [2024-12-10 05:53:23.477222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.842 [2024-12-10 05:53:23.477228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.842 [2024-12-10 05:53:23.489379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.842 [2024-12-10 05:53:23.489807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.842 [2024-12-10 05:53:23.489824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.842 [2024-12-10 05:53:23.489831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.842 [2024-12-10 05:53:23.489998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.842 [2024-12-10 05:53:23.490172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.842 [2024-12-10 05:53:23.490181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.842 [2024-12-10 05:53:23.490187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.842 [2024-12-10 05:53:23.490193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.843 [2024-12-10 05:53:23.502108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.843 [2024-12-10 05:53:23.502503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.843 [2024-12-10 05:53:23.502519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.843 [2024-12-10 05:53:23.502526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.843 [2024-12-10 05:53:23.502684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.843 [2024-12-10 05:53:23.502843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.843 [2024-12-10 05:53:23.502850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.843 [2024-12-10 05:53:23.502856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.843 [2024-12-10 05:53:23.502862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.843 [2024-12-10 05:53:23.514917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.843 [2024-12-10 05:53:23.515337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.843 [2024-12-10 05:53:23.515354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.843 [2024-12-10 05:53:23.515360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.843 [2024-12-10 05:53:23.515519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.843 [2024-12-10 05:53:23.515678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.843 [2024-12-10 05:53:23.515685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.843 [2024-12-10 05:53:23.515695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.843 [2024-12-10 05:53:23.515701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.843 [2024-12-10 05:53:23.527711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.843 [2024-12-10 05:53:23.528060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.843 [2024-12-10 05:53:23.528076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.843 [2024-12-10 05:53:23.528083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.843 [2024-12-10 05:53:23.528266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.843 [2024-12-10 05:53:23.528434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.843 [2024-12-10 05:53:23.528442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.843 [2024-12-10 05:53:23.528448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.843 [2024-12-10 05:53:23.528454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.843 [2024-12-10 05:53:23.540474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.843 [2024-12-10 05:53:23.540884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.843 [2024-12-10 05:53:23.540900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.843 [2024-12-10 05:53:23.540907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.843 [2024-12-10 05:53:23.541065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.843 [2024-12-10 05:53:23.541247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.843 [2024-12-10 05:53:23.541256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.843 [2024-12-10 05:53:23.541262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.843 [2024-12-10 05:53:23.541268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.843 [2024-12-10 05:53:23.553286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.843 [2024-12-10 05:53:23.553702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.843 [2024-12-10 05:53:23.553718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.843 [2024-12-10 05:53:23.553724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.843 [2024-12-10 05:53:23.553882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.843 [2024-12-10 05:53:23.554040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.843 [2024-12-10 05:53:23.554047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.843 [2024-12-10 05:53:23.554053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.843 [2024-12-10 05:53:23.554059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.843 [2024-12-10 05:53:23.566032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.843 [2024-12-10 05:53:23.566482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.843 [2024-12-10 05:53:23.566527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.843 [2024-12-10 05:53:23.566550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.843 [2024-12-10 05:53:23.567131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.843 [2024-12-10 05:53:23.567663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.843 [2024-12-10 05:53:23.567671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.843 [2024-12-10 05:53:23.567678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.843 [2024-12-10 05:53:23.567684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.843 [2024-12-10 05:53:23.578915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.843 [2024-12-10 05:53:23.579250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.843 [2024-12-10 05:53:23.579266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.843 [2024-12-10 05:53:23.579272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.843 [2024-12-10 05:53:23.579431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.843 [2024-12-10 05:53:23.579590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.843 [2024-12-10 05:53:23.579597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.843 [2024-12-10 05:53:23.579603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.843 [2024-12-10 05:53:23.579609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.843 5878.40 IOPS, 22.96 MiB/s [2024-12-10T04:53:23.739Z] [2024-12-10 05:53:23.591749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.843 [2024-12-10 05:53:23.592153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.843 [2024-12-10 05:53:23.592211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.843 [2024-12-10 05:53:23.592234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.843 [2024-12-10 05:53:23.592707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.843 [2024-12-10 05:53:23.592876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.843 [2024-12-10 05:53:23.592886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.843 [2024-12-10 05:53:23.592893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.843 [2024-12-10 05:53:23.592899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.843 [2024-12-10 05:53:23.604604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.843 [2024-12-10 05:53:23.604931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.843 [2024-12-10 05:53:23.604951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.843 [2024-12-10 05:53:23.604958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.843 [2024-12-10 05:53:23.605125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.843 [2024-12-10 05:53:23.605301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.843 [2024-12-10 05:53:23.605310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.843 [2024-12-10 05:53:23.605316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.843 [2024-12-10 05:53:23.605322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.843 [2024-12-10 05:53:23.617449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.843 [2024-12-10 05:53:23.617859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.843 [2024-12-10 05:53:23.617903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.843 [2024-12-10 05:53:23.617926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.843 [2024-12-10 05:53:23.618522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.843 [2024-12-10 05:53:23.618950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.843 [2024-12-10 05:53:23.618958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.843 [2024-12-10 05:53:23.618965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.843 [2024-12-10 05:53:23.618971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.844 [2024-12-10 05:53:23.630203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.844 [2024-12-10 05:53:23.630543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.844 [2024-12-10 05:53:23.630559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.844 [2024-12-10 05:53:23.630566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.844 [2024-12-10 05:53:23.630723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.844 [2024-12-10 05:53:23.630881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.844 [2024-12-10 05:53:23.630889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.844 [2024-12-10 05:53:23.630894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.844 [2024-12-10 05:53:23.630900] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.844 [2024-12-10 05:53:23.642971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.844 [2024-12-10 05:53:23.643362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.844 [2024-12-10 05:53:23.643378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.844 [2024-12-10 05:53:23.643385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.844 [2024-12-10 05:53:23.643546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.844 [2024-12-10 05:53:23.643705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.844 [2024-12-10 05:53:23.643712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.844 [2024-12-10 05:53:23.643718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.844 [2024-12-10 05:53:23.643724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.844 [2024-12-10 05:53:23.655695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.844 [2024-12-10 05:53:23.656109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.844 [2024-12-10 05:53:23.656153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.844 [2024-12-10 05:53:23.656190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.844 [2024-12-10 05:53:23.656725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.844 [2024-12-10 05:53:23.656892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.844 [2024-12-10 05:53:23.656900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.844 [2024-12-10 05:53:23.656906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.844 [2024-12-10 05:53:23.656912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.844 [2024-12-10 05:53:23.668474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.844 [2024-12-10 05:53:23.668893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.844 [2024-12-10 05:53:23.668909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.844 [2024-12-10 05:53:23.668916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.844 [2024-12-10 05:53:23.669074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.844 [2024-12-10 05:53:23.669256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.844 [2024-12-10 05:53:23.669265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.844 [2024-12-10 05:53:23.669271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.844 [2024-12-10 05:53:23.669277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.844 [2024-12-10 05:53:23.681284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.844 [2024-12-10 05:53:23.681691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.844 [2024-12-10 05:53:23.681707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.844 [2024-12-10 05:53:23.681714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.844 [2024-12-10 05:53:23.681872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.844 [2024-12-10 05:53:23.682030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.844 [2024-12-10 05:53:23.682038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.844 [2024-12-10 05:53:23.682047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.844 [2024-12-10 05:53:23.682053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.844 [2024-12-10 05:53:23.694188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.844 [2024-12-10 05:53:23.694647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.844 [2024-12-10 05:53:23.694692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.844 [2024-12-10 05:53:23.694715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.844 [2024-12-10 05:53:23.695124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.844 [2024-12-10 05:53:23.695297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.844 [2024-12-10 05:53:23.695306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.844 [2024-12-10 05:53:23.695312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.844 [2024-12-10 05:53:23.695319] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.844 [2024-12-10 05:53:23.707044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.844 [2024-12-10 05:53:23.707479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.844 [2024-12-10 05:53:23.707524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.844 [2024-12-10 05:53:23.707547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.844 [2024-12-10 05:53:23.708099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.844 [2024-12-10 05:53:23.708272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.844 [2024-12-10 05:53:23.708280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.844 [2024-12-10 05:53:23.708287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.844 [2024-12-10 05:53:23.708293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.844 [2024-12-10 05:53:23.719813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.844 [2024-12-10 05:53:23.720249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.844 [2024-12-10 05:53:23.720266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:35.844 [2024-12-10 05:53:23.720273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:35.844 [2024-12-10 05:53:23.720445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:35.844 [2024-12-10 05:53:23.720619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.844 [2024-12-10 05:53:23.720627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.844 [2024-12-10 05:53:23.720634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.844 [2024-12-10 05:53:23.720640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.104 [2024-12-10 05:53:23.732839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.104 [2024-12-10 05:53:23.733270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.104 [2024-12-10 05:53:23.733288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.104 [2024-12-10 05:53:23.733295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.104 [2024-12-10 05:53:23.733468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.104 [2024-12-10 05:53:23.733641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.104 [2024-12-10 05:53:23.733649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.104 [2024-12-10 05:53:23.733655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.104 [2024-12-10 05:53:23.733662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.104 [2024-12-10 05:53:23.745762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.104 [2024-12-10 05:53:23.746111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.104 [2024-12-10 05:53:23.746155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.104 [2024-12-10 05:53:23.746194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.104 [2024-12-10 05:53:23.746776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.104 [2024-12-10 05:53:23.747030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.104 [2024-12-10 05:53:23.747038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.104 [2024-12-10 05:53:23.747045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.104 [2024-12-10 05:53:23.747051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.104 [2024-12-10 05:53:23.758556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.104 [2024-12-10 05:53:23.758968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.104 [2024-12-10 05:53:23.758984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.104 [2024-12-10 05:53:23.758991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.104 [2024-12-10 05:53:23.759149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.104 [2024-12-10 05:53:23.759335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.104 [2024-12-10 05:53:23.759344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.104 [2024-12-10 05:53:23.759350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.104 [2024-12-10 05:53:23.759356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.104 [2024-12-10 05:53:23.771278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.104 [2024-12-10 05:53:23.771680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.104 [2024-12-10 05:53:23.771732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.104 [2024-12-10 05:53:23.771755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.104 [2024-12-10 05:53:23.772248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.104 [2024-12-10 05:53:23.772416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.104 [2024-12-10 05:53:23.772424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.104 [2024-12-10 05:53:23.772430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.104 [2024-12-10 05:53:23.772436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.104 [2024-12-10 05:53:23.784067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.104 [2024-12-10 05:53:23.784510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.105 [2024-12-10 05:53:23.784526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.105 [2024-12-10 05:53:23.784533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.105 [2024-12-10 05:53:23.784700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.105 [2024-12-10 05:53:23.784867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.105 [2024-12-10 05:53:23.784875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.105 [2024-12-10 05:53:23.784881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.105 [2024-12-10 05:53:23.784887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.105 [2024-12-10 05:53:23.796918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.105 [2024-12-10 05:53:23.797342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.105 [2024-12-10 05:53:23.797388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.105 [2024-12-10 05:53:23.797411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.105 [2024-12-10 05:53:23.797908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.105 [2024-12-10 05:53:23.798067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.105 [2024-12-10 05:53:23.798075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.105 [2024-12-10 05:53:23.798081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.105 [2024-12-10 05:53:23.798086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.105 [2024-12-10 05:53:23.809766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.105 [2024-12-10 05:53:23.810187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.105 [2024-12-10 05:53:23.810204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.105 [2024-12-10 05:53:23.810210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.105 [2024-12-10 05:53:23.810371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.105 [2024-12-10 05:53:23.810530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.105 [2024-12-10 05:53:23.810537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.105 [2024-12-10 05:53:23.810542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.105 [2024-12-10 05:53:23.810548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.105 [2024-12-10 05:53:23.822582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.105 [2024-12-10 05:53:23.822993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.105 [2024-12-10 05:53:23.823008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.105 [2024-12-10 05:53:23.823015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.105 [2024-12-10 05:53:23.823179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.105 [2024-12-10 05:53:23.823381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.105 [2024-12-10 05:53:23.823389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.105 [2024-12-10 05:53:23.823396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.105 [2024-12-10 05:53:23.823402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.105 [2024-12-10 05:53:23.835332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.105 [2024-12-10 05:53:23.835719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.105 [2024-12-10 05:53:23.835735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.105 [2024-12-10 05:53:23.835741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.105 [2024-12-10 05:53:23.835899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.105 [2024-12-10 05:53:23.836058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.105 [2024-12-10 05:53:23.836065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.105 [2024-12-10 05:53:23.836071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.105 [2024-12-10 05:53:23.836077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.105 [2024-12-10 05:53:23.848107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.105 [2024-12-10 05:53:23.848551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.105 [2024-12-10 05:53:23.848568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.105 [2024-12-10 05:53:23.848575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.105 [2024-12-10 05:53:23.848742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.105 [2024-12-10 05:53:23.848910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.105 [2024-12-10 05:53:23.848918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.105 [2024-12-10 05:53:23.848927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.105 [2024-12-10 05:53:23.848933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.105 [2024-12-10 05:53:23.861019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.105 [2024-12-10 05:53:23.861462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.105 [2024-12-10 05:53:23.861507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.105 [2024-12-10 05:53:23.861530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.105 [2024-12-10 05:53:23.862110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.105 [2024-12-10 05:53:23.862352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.105 [2024-12-10 05:53:23.862361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.105 [2024-12-10 05:53:23.862367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.105 [2024-12-10 05:53:23.862373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.105 [2024-12-10 05:53:23.873849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.105 [2024-12-10 05:53:23.874245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.105 [2024-12-10 05:53:23.874262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.105 [2024-12-10 05:53:23.874270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.105 [2024-12-10 05:53:23.874429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.105 [2024-12-10 05:53:23.874587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.105 [2024-12-10 05:53:23.874595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.105 [2024-12-10 05:53:23.874602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.105 [2024-12-10 05:53:23.874607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.105 [2024-12-10 05:53:23.886710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.105 [2024-12-10 05:53:23.887102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.105 [2024-12-10 05:53:23.887118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.105 [2024-12-10 05:53:23.887125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.105 [2024-12-10 05:53:23.887310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.105 [2024-12-10 05:53:23.887478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.105 [2024-12-10 05:53:23.887486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.105 [2024-12-10 05:53:23.887492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.105 [2024-12-10 05:53:23.887498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.105 [2024-12-10 05:53:23.899510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.105 [2024-12-10 05:53:23.899939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.105 [2024-12-10 05:53:23.899983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.105 [2024-12-10 05:53:23.900006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.105 [2024-12-10 05:53:23.900548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.105 [2024-12-10 05:53:23.900716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.105 [2024-12-10 05:53:23.900724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.105 [2024-12-10 05:53:23.900730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.105 [2024-12-10 05:53:23.900736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.105 [2024-12-10 05:53:23.912442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.105 [2024-12-10 05:53:23.912878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.106 [2024-12-10 05:53:23.912922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.106 [2024-12-10 05:53:23.912945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.106 [2024-12-10 05:53:23.913408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.106 [2024-12-10 05:53:23.913577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.106 [2024-12-10 05:53:23.913585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.106 [2024-12-10 05:53:23.913591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.106 [2024-12-10 05:53:23.913597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.106 [2024-12-10 05:53:23.925380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.106 [2024-12-10 05:53:23.925751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.106 [2024-12-10 05:53:23.925766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.106 [2024-12-10 05:53:23.925773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.106 [2024-12-10 05:53:23.925931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.106 [2024-12-10 05:53:23.926089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.106 [2024-12-10 05:53:23.926097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.106 [2024-12-10 05:53:23.926103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.106 [2024-12-10 05:53:23.926108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.106 [2024-12-10 05:53:23.938286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.106 [2024-12-10 05:53:23.938624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.106 [2024-12-10 05:53:23.938644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.106 [2024-12-10 05:53:23.938651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.106 [2024-12-10 05:53:23.938818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.106 [2024-12-10 05:53:23.938985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.106 [2024-12-10 05:53:23.938994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.106 [2024-12-10 05:53:23.939002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.106 [2024-12-10 05:53:23.939010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.106 [2024-12-10 05:53:23.951313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.106 [2024-12-10 05:53:23.951662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.106 [2024-12-10 05:53:23.951678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.106 [2024-12-10 05:53:23.951685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.106 [2024-12-10 05:53:23.951852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.106 [2024-12-10 05:53:23.952019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.106 [2024-12-10 05:53:23.952027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.106 [2024-12-10 05:53:23.952033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.106 [2024-12-10 05:53:23.952039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.106 [2024-12-10 05:53:23.964170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.106 [2024-12-10 05:53:23.964521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.106 [2024-12-10 05:53:23.964537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.106 [2024-12-10 05:53:23.964544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.106 [2024-12-10 05:53:23.964711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.106 [2024-12-10 05:53:23.964878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.106 [2024-12-10 05:53:23.964886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.106 [2024-12-10 05:53:23.964892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.106 [2024-12-10 05:53:23.964898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.106 [2024-12-10 05:53:23.977026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.106 [2024-12-10 05:53:23.977401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.106 [2024-12-10 05:53:23.977418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.106 [2024-12-10 05:53:23.977425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.106 [2024-12-10 05:53:23.977607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.106 [2024-12-10 05:53:23.977781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.106 [2024-12-10 05:53:23.977789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.106 [2024-12-10 05:53:23.977796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.106 [2024-12-10 05:53:23.977802] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.106 [2024-12-10 05:53:23.990033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.106 [2024-12-10 05:53:23.990394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.106 [2024-12-10 05:53:23.990410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.106 [2024-12-10 05:53:23.990417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.106 [2024-12-10 05:53:23.990590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.106 [2024-12-10 05:53:23.990762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.106 [2024-12-10 05:53:23.990770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.106 [2024-12-10 05:53:23.990776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.106 [2024-12-10 05:53:23.990782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.366 [2024-12-10 05:53:24.003064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.366 [2024-12-10 05:53:24.003411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.366 [2024-12-10 05:53:24.003428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.366 [2024-12-10 05:53:24.003435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.366 [2024-12-10 05:53:24.003607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.366 [2024-12-10 05:53:24.003780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.366 [2024-12-10 05:53:24.003788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.366 [2024-12-10 05:53:24.003794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.366 [2024-12-10 05:53:24.003800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.366 [2024-12-10 05:53:24.016043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.366 [2024-12-10 05:53:24.016396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.366 [2024-12-10 05:53:24.016423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.366 [2024-12-10 05:53:24.016431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.366 [2024-12-10 05:53:24.016599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.366 [2024-12-10 05:53:24.016766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.366 [2024-12-10 05:53:24.016774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.366 [2024-12-10 05:53:24.016784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.366 [2024-12-10 05:53:24.016790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.366 [2024-12-10 05:53:24.028850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.366 [2024-12-10 05:53:24.029134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.366 [2024-12-10 05:53:24.029150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.366 [2024-12-10 05:53:24.029157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.366 [2024-12-10 05:53:24.029330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.366 [2024-12-10 05:53:24.029497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.366 [2024-12-10 05:53:24.029505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.366 [2024-12-10 05:53:24.029511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.366 [2024-12-10 05:53:24.029517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.366 [2024-12-10 05:53:24.041657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.366 [2024-12-10 05:53:24.041920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.366 [2024-12-10 05:53:24.041936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.366 [2024-12-10 05:53:24.041943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.366 [2024-12-10 05:53:24.042110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.366 [2024-12-10 05:53:24.042283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.366 [2024-12-10 05:53:24.042292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.366 [2024-12-10 05:53:24.042298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.366 [2024-12-10 05:53:24.042303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.366 [2024-12-10 05:53:24.054492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.366 [2024-12-10 05:53:24.054772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.366 [2024-12-10 05:53:24.054788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.366 [2024-12-10 05:53:24.054795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.366 [2024-12-10 05:53:24.054962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.366 [2024-12-10 05:53:24.055130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.366 [2024-12-10 05:53:24.055138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.366 [2024-12-10 05:53:24.055144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.366 [2024-12-10 05:53:24.055150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.366 [2024-12-10 05:53:24.067340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.366 [2024-12-10 05:53:24.067738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.366 [2024-12-10 05:53:24.067782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.366 [2024-12-10 05:53:24.067804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.366 [2024-12-10 05:53:24.068400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.366 [2024-12-10 05:53:24.068909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.367 [2024-12-10 05:53:24.068917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.367 [2024-12-10 05:53:24.068923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.367 [2024-12-10 05:53:24.068929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.367 [2024-12-10 05:53:24.080307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.367 [2024-12-10 05:53:24.080669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.367 [2024-12-10 05:53:24.080715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.367 [2024-12-10 05:53:24.080738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.367 [2024-12-10 05:53:24.081336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.367 [2024-12-10 05:53:24.081920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.367 [2024-12-10 05:53:24.081928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.367 [2024-12-10 05:53:24.081934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.367 [2024-12-10 05:53:24.081940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.367 [2024-12-10 05:53:24.093232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.367 [2024-12-10 05:53:24.093575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.367 [2024-12-10 05:53:24.093591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.367 [2024-12-10 05:53:24.093598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.367 [2024-12-10 05:53:24.093766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.367 [2024-12-10 05:53:24.093935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.367 [2024-12-10 05:53:24.093943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.367 [2024-12-10 05:53:24.093948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.367 [2024-12-10 05:53:24.093954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.367 [2024-12-10 05:53:24.106122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.367 [2024-12-10 05:53:24.106432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.367 [2024-12-10 05:53:24.106489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.367 [2024-12-10 05:53:24.106513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.367 [2024-12-10 05:53:24.107044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.367 [2024-12-10 05:53:24.107223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.367 [2024-12-10 05:53:24.107232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.367 [2024-12-10 05:53:24.107238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.367 [2024-12-10 05:53:24.107244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.367 [2024-12-10 05:53:24.118955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.367 [2024-12-10 05:53:24.119240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.367 [2024-12-10 05:53:24.119258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.367 [2024-12-10 05:53:24.119266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.367 [2024-12-10 05:53:24.119439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.367 [2024-12-10 05:53:24.119613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.367 [2024-12-10 05:53:24.119621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.367 [2024-12-10 05:53:24.119628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.367 [2024-12-10 05:53:24.119634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.367 [2024-12-10 05:53:24.131766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.367 [2024-12-10 05:53:24.132115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.367 [2024-12-10 05:53:24.132131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.367 [2024-12-10 05:53:24.132139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.367 [2024-12-10 05:53:24.132311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.367 [2024-12-10 05:53:24.132480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.367 [2024-12-10 05:53:24.132487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.367 [2024-12-10 05:53:24.132493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.367 [2024-12-10 05:53:24.132499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1344522 Killed "${NVMF_APP[@]}" "$@"
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:36.367 [2024-12-10 05:53:24.144714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.367 [2024-12-10 05:53:24.145069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.367 [2024-12-10 05:53:24.145086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.367 [2024-12-10 05:53:24.145093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.367 [2024-12-10 05:53:24.145271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.367 [2024-12-10 05:53:24.145444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.367 [2024-12-10 05:53:24.145452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.367 [2024-12-10 05:53:24.145459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.367 [2024-12-10 05:53:24.145464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1345805
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1345805
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1345805 ']'
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:36.367 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:36.367 [2024-12-10 05:53:24.157823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.367 [2024-12-10 05:53:24.158154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.367 [2024-12-10 05:53:24.158176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.367 [2024-12-10 05:53:24.158183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.367 [2024-12-10 05:53:24.158356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.367 [2024-12-10 05:53:24.158528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.367 [2024-12-10 05:53:24.158537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.367 [2024-12-10 05:53:24.158543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.367 [2024-12-10 05:53:24.158549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.367 [2024-12-10 05:53:24.170907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.367 [2024-12-10 05:53:24.171181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.367 [2024-12-10 05:53:24.171198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.367 [2024-12-10 05:53:24.171208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.367 [2024-12-10 05:53:24.171381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.367 [2024-12-10 05:53:24.171554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.367 [2024-12-10 05:53:24.171562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.367 [2024-12-10 05:53:24.171569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.367 [2024-12-10 05:53:24.171575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.367 [2024-12-10 05:53:24.183918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.367 [2024-12-10 05:53:24.184250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.367 [2024-12-10 05:53:24.184267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.368 [2024-12-10 05:53:24.184274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.368 [2024-12-10 05:53:24.184447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.368 [2024-12-10 05:53:24.184621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.368 [2024-12-10 05:53:24.184629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.368 [2024-12-10 05:53:24.184635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.368 [2024-12-10 05:53:24.184641] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.368 [2024-12-10 05:53:24.196888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.368 [2024-12-10 05:53:24.197257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.368 [2024-12-10 05:53:24.197275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.368 [2024-12-10 05:53:24.197282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.368 [2024-12-10 05:53:24.197455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.368 [2024-12-10 05:53:24.197632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.368 [2024-12-10 05:53:24.197640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.368 [2024-12-10 05:53:24.197647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.368 [2024-12-10 05:53:24.197653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.368 [2024-12-10 05:53:24.199912] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization...
00:28:36.368 [2024-12-10 05:53:24.199950] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:36.368 [2024-12-10 05:53:24.209966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.368 [2024-12-10 05:53:24.210260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.368 [2024-12-10 05:53:24.210278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.368 [2024-12-10 05:53:24.210289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.368 [2024-12-10 05:53:24.210462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.368 [2024-12-10 05:53:24.210635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.368 [2024-12-10 05:53:24.210643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.368 [2024-12-10 05:53:24.210649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.368 [2024-12-10 05:53:24.210656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.368 [2024-12-10 05:53:24.222972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.368 [2024-12-10 05:53:24.223367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.368 [2024-12-10 05:53:24.223384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.368 [2024-12-10 05:53:24.223391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.368 [2024-12-10 05:53:24.223564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.368 [2024-12-10 05:53:24.223740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.368 [2024-12-10 05:53:24.223748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.368 [2024-12-10 05:53:24.223755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.368 [2024-12-10 05:53:24.223761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.368 [2024-12-10 05:53:24.235974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.368 [2024-12-10 05:53:24.236318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.368 [2024-12-10 05:53:24.236335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.368 [2024-12-10 05:53:24.236342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.368 [2024-12-10 05:53:24.236515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.368 [2024-12-10 05:53:24.236688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.368 [2024-12-10 05:53:24.236697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.368 [2024-12-10 05:53:24.236704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.368 [2024-12-10 05:53:24.236710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.368 [2024-12-10 05:53:24.249076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.368 [2024-12-10 05:53:24.249412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.368 [2024-12-10 05:53:24.249429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.368 [2024-12-10 05:53:24.249436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.368 [2024-12-10 05:53:24.249608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.368 [2024-12-10 05:53:24.249784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.368 [2024-12-10 05:53:24.249792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.368 [2024-12-10 05:53:24.249799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.368 [2024-12-10 05:53:24.249805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.628 [2024-12-10 05:53:24.262171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.628 [2024-12-10 05:53:24.262581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.628 [2024-12-10 05:53:24.262597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.628 [2024-12-10 05:53:24.262605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.628 [2024-12-10 05:53:24.262777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.628 [2024-12-10 05:53:24.262950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.628 [2024-12-10 05:53:24.262959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.628 [2024-12-10 05:53:24.262965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.628 [2024-12-10 05:53:24.262972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.628 [2024-12-10 05:53:24.275123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.628 [2024-12-10 05:53:24.275409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.628 [2024-12-10 05:53:24.275426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420
00:28:36.628 [2024-12-10 05:53:24.275433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set
00:28:36.628 [2024-12-10 05:53:24.275607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor
00:28:36.628 [2024-12-10 05:53:24.275780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.628 [2024-12-10 05:53:24.275788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.628 [2024-12-10 05:53:24.275794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.628 [2024-12-10 05:53:24.275801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.628 [2024-12-10 05:53:24.279763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:36.628 [2024-12-10 05:53:24.288172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.628 [2024-12-10 05:53:24.288595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.628 [2024-12-10 05:53:24.288614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.628 [2024-12-10 05:53:24.288622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.628 [2024-12-10 05:53:24.288795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.628 [2024-12-10 05:53:24.288969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.628 [2024-12-10 05:53:24.288977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.628 [2024-12-10 05:53:24.288989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.628 [2024-12-10 05:53:24.288995] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.628 [2024-12-10 05:53:24.301282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.628 [2024-12-10 05:53:24.301640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.628 [2024-12-10 05:53:24.301658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.628 [2024-12-10 05:53:24.301665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.629 [2024-12-10 05:53:24.301838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.629 [2024-12-10 05:53:24.302011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.629 [2024-12-10 05:53:24.302019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.629 [2024-12-10 05:53:24.302026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.629 [2024-12-10 05:53:24.302032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.629 [2024-12-10 05:53:24.314275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.629 [2024-12-10 05:53:24.314721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.629 [2024-12-10 05:53:24.314739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.629 [2024-12-10 05:53:24.314746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.629 [2024-12-10 05:53:24.314919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.629 [2024-12-10 05:53:24.315092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.629 [2024-12-10 05:53:24.315100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.629 [2024-12-10 05:53:24.315107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.629 [2024-12-10 05:53:24.315113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.629 [2024-12-10 05:53:24.320471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.629 [2024-12-10 05:53:24.320495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.629 [2024-12-10 05:53:24.320503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.629 [2024-12-10 05:53:24.320509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:36.629 [2024-12-10 05:53:24.320514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.629 [2024-12-10 05:53:24.321784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.629 [2024-12-10 05:53:24.321892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.629 [2024-12-10 05:53:24.321893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.629 [2024-12-10 05:53:24.327330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.629 [2024-12-10 05:53:24.327669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.629 [2024-12-10 05:53:24.327688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.629 [2024-12-10 05:53:24.327701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.629 [2024-12-10 05:53:24.327875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.629 [2024-12-10 05:53:24.328050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.629 [2024-12-10 05:53:24.328058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.629 [2024-12-10 05:53:24.328065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.629 [2024-12-10 05:53:24.328072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.629 [2024-12-10 05:53:24.340434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.629 [2024-12-10 05:53:24.340862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.629 [2024-12-10 05:53:24.340883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.629 [2024-12-10 05:53:24.340891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.629 [2024-12-10 05:53:24.341065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.629 [2024-12-10 05:53:24.341246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.629 [2024-12-10 05:53:24.341255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.629 [2024-12-10 05:53:24.341262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.629 [2024-12-10 05:53:24.341268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.629 [2024-12-10 05:53:24.353458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.629 [2024-12-10 05:53:24.353891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.629 [2024-12-10 05:53:24.353912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.629 [2024-12-10 05:53:24.353921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.629 [2024-12-10 05:53:24.354095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.629 [2024-12-10 05:53:24.354278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.629 [2024-12-10 05:53:24.354287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.629 [2024-12-10 05:53:24.354294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.629 [2024-12-10 05:53:24.354301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.629 [2024-12-10 05:53:24.366493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.629 [2024-12-10 05:53:24.366948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.629 [2024-12-10 05:53:24.366967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.629 [2024-12-10 05:53:24.366975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.629 [2024-12-10 05:53:24.367150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.629 [2024-12-10 05:53:24.367334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.629 [2024-12-10 05:53:24.367343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.629 [2024-12-10 05:53:24.367351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.629 [2024-12-10 05:53:24.367358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.629 [2024-12-10 05:53:24.379561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.629 [2024-12-10 05:53:24.379983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.629 [2024-12-10 05:53:24.380003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.629 [2024-12-10 05:53:24.380011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.629 [2024-12-10 05:53:24.380190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.629 [2024-12-10 05:53:24.380363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.629 [2024-12-10 05:53:24.380371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.629 [2024-12-10 05:53:24.380378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.629 [2024-12-10 05:53:24.380385] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.629 [2024-12-10 05:53:24.392631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.629 [2024-12-10 05:53:24.393076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.629 [2024-12-10 05:53:24.393094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.629 [2024-12-10 05:53:24.393101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.629 [2024-12-10 05:53:24.393279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.629 [2024-12-10 05:53:24.393453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.629 [2024-12-10 05:53:24.393461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.629 [2024-12-10 05:53:24.393469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.629 [2024-12-10 05:53:24.393475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.629 [2024-12-10 05:53:24.405665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.629 [2024-12-10 05:53:24.406119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.629 [2024-12-10 05:53:24.406137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.629 [2024-12-10 05:53:24.406145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.629 [2024-12-10 05:53:24.406324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.629 [2024-12-10 05:53:24.406498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.629 [2024-12-10 05:53:24.406507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.629 [2024-12-10 05:53:24.406520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.629 [2024-12-10 05:53:24.406527] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.629 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.629 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:36.629 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:36.629 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.630 [2024-12-10 05:53:24.418708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.630 [2024-12-10 05:53:24.419136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.630 [2024-12-10 05:53:24.419154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.630 [2024-12-10 05:53:24.419162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.630 [2024-12-10 05:53:24.419342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.630 [2024-12-10 05:53:24.419516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.630 [2024-12-10 05:53:24.419525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.630 [2024-12-10 05:53:24.419534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.630 [2024-12-10 05:53:24.419540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.630 [2024-12-10 05:53:24.431735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.630 [2024-12-10 05:53:24.432121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.630 [2024-12-10 05:53:24.432138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.630 [2024-12-10 05:53:24.432149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.630 [2024-12-10 05:53:24.432328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.630 [2024-12-10 05:53:24.432502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.630 [2024-12-10 05:53:24.432511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.630 [2024-12-10 05:53:24.432518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.630 [2024-12-10 05:53:24.432524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.630 [2024-12-10 05:53:24.444722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.630 [2024-12-10 05:53:24.445060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.630 [2024-12-10 05:53:24.445077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.630 [2024-12-10 05:53:24.445085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.630 [2024-12-10 05:53:24.445264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.630 [2024-12-10 05:53:24.445437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.630 [2024-12-10 05:53:24.445449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.630 [2024-12-10 05:53:24.445456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.630 [2024-12-10 05:53:24.445462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.630 [2024-12-10 05:53:24.457318] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.630 [2024-12-10 05:53:24.457833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.630 [2024-12-10 05:53:24.458274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.630 [2024-12-10 05:53:24.458291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.630 [2024-12-10 05:53:24.458298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.630 [2024-12-10 05:53:24.458471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.630 [2024-12-10 05:53:24.458645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.630 [2024-12-10 05:53:24.458653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.630 [2024-12-10 05:53:24.458659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.630 [2024-12-10 05:53:24.458665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.630 [2024-12-10 05:53:24.470872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.630 [2024-12-10 05:53:24.471296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.630 [2024-12-10 05:53:24.471313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.630 [2024-12-10 05:53:24.471321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.630 [2024-12-10 05:53:24.471495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.630 [2024-12-10 05:53:24.471668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.630 [2024-12-10 05:53:24.471676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.630 [2024-12-10 05:53:24.471682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.630 [2024-12-10 05:53:24.471689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.630 [2024-12-10 05:53:24.483900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.630 [2024-12-10 05:53:24.484260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.630 [2024-12-10 05:53:24.484280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.630 [2024-12-10 05:53:24.484287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.630 [2024-12-10 05:53:24.484460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.630 [2024-12-10 05:53:24.484632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.630 [2024-12-10 05:53:24.484640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.630 [2024-12-10 05:53:24.484647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.630 [2024-12-10 05:53:24.484653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.630 [2024-12-10 05:53:24.497004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.630 [2024-12-10 05:53:24.497419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.630 [2024-12-10 05:53:24.497436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.630 [2024-12-10 05:53:24.497444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.630 [2024-12-10 05:53:24.497616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.630 [2024-12-10 05:53:24.497789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.630 [2024-12-10 05:53:24.497797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.630 [2024-12-10 05:53:24.497803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.630 [2024-12-10 05:53:24.497810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.630 Malloc0 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.630 [2024-12-10 05:53:24.509992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.630 [2024-12-10 05:53:24.510430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.630 [2024-12-10 05:53:24.510447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aef7e0 with addr=10.0.0.2, port=4420 00:28:36.630 [2024-12-10 05:53:24.510455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef7e0 is same with the state(6) to be set 00:28:36.630 [2024-12-10 05:53:24.510639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aef7e0 (9): Bad file descriptor 00:28:36.630 [2024-12-10 05:53:24.510823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.630 [2024-12-10 05:53:24.510832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.630 [2024-12-10 05:53:24.510838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.630 [2024-12-10 05:53:24.510845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.630 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.889 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.889 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.889 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.889 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.889 [2024-12-10 05:53:24.522520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.889 [2024-12-10 05:53:24.522991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.889 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.889 05:53:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1344780 00:28:36.889 [2024-12-10 05:53:24.549515] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:28:37.827 4967.00 IOPS, 19.40 MiB/s [2024-12-10T04:53:26.659Z] 5896.86 IOPS, 23.03 MiB/s [2024-12-10T04:53:28.035Z] 6599.75 IOPS, 25.78 MiB/s [2024-12-10T04:53:28.971Z] 7139.78 IOPS, 27.89 MiB/s [2024-12-10T04:53:29.907Z] 7566.20 IOPS, 29.56 MiB/s [2024-12-10T04:53:30.842Z] 7923.91 IOPS, 30.95 MiB/s [2024-12-10T04:53:31.778Z] 8225.08 IOPS, 32.13 MiB/s [2024-12-10T04:53:32.713Z] 8480.85 IOPS, 33.13 MiB/s [2024-12-10T04:53:33.649Z] 8705.07 IOPS, 34.00 MiB/s [2024-12-10T04:53:33.907Z] 8892.13 IOPS, 34.73 MiB/s
00:28:46.011 Latency(us)
00:28:46.011 [2024-12-10T04:53:33.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:46.011 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:46.011 Verification LBA range: start 0x0 length 0x4000
00:28:46.011 Nvme1n1 : 15.04 8866.78 34.64 11115.39 0.00 6369.10 477.87 42442.36
00:28:46.011 [2024-12-10T04:53:33.907Z] ===================================================================================================================
00:28:46.011 [2024-12-10T04:53:33.907Z] Total : 8866.78 34.64 11115.39 0.00 6369.10 477.87 42442.36
00:28:46.011 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:46.012 rmmod nvme_tcp
00:28:46.012 rmmod nvme_fabrics
00:28:46.012 rmmod nvme_keyring
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:46.012 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1345805 ']'
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1345805
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1345805 ']'
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1345805
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1345805
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1345805'
00:28:46.271 killing process with pid 1345805
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1345805
00:28:46.271 05:53:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1345805
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:46.271 05:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:48.806
00:28:48.806 real 0m26.136s
00:28:48.806 user 1m1.338s
00:28:48.806 sys 0m6.671s
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:48.806 ************************************
00:28:48.806 END TEST nvmf_bdevperf
00:28:48.806 ************************************
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.806 ************************************
00:28:48.806 START TEST nvmf_target_disconnect
00:28:48.806 ************************************
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:28:48.806 * Looking for test storage...
00:28:48.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:28:48.806 05:53:36
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:48.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.806 --rc genhtml_branch_coverage=1 00:28:48.806 --rc genhtml_function_coverage=1 00:28:48.806 --rc genhtml_legend=1 00:28:48.806 --rc geninfo_all_blocks=1 00:28:48.806 --rc geninfo_unexecuted_blocks=1 
00:28:48.806 00:28:48.806 ' 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:48.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.806 --rc genhtml_branch_coverage=1 00:28:48.806 --rc genhtml_function_coverage=1 00:28:48.806 --rc genhtml_legend=1 00:28:48.806 --rc geninfo_all_blocks=1 00:28:48.806 --rc geninfo_unexecuted_blocks=1 00:28:48.806 00:28:48.806 ' 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:48.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.806 --rc genhtml_branch_coverage=1 00:28:48.806 --rc genhtml_function_coverage=1 00:28:48.806 --rc genhtml_legend=1 00:28:48.806 --rc geninfo_all_blocks=1 00:28:48.806 --rc geninfo_unexecuted_blocks=1 00:28:48.806 00:28:48.806 ' 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:48.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.806 --rc genhtml_branch_coverage=1 00:28:48.806 --rc genhtml_function_coverage=1 00:28:48.806 --rc genhtml_legend=1 00:28:48.806 --rc geninfo_all_blocks=1 00:28:48.806 --rc geninfo_unexecuted_blocks=1 00:28:48.806 00:28:48.806 ' 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.806 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.807 05:53:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.807 05:53:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:55.376 
05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:55.376 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:55.376 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:55.376 Found net devices under 0000:af:00.0: cvl_0_0 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:55.376 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:55.377 Found net devices under 0000:af:00.1: cvl_0_1 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.377 05:53:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:28:55.377 00:28:55.377 --- 10.0.0.2 ping statistics --- 00:28:55.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.377 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:55.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:28:55.377 00:28:55.377 --- 10.0.0.1 ping statistics --- 00:28:55.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.377 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:55.377 05:53:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:55.377 ************************************ 00:28:55.377 START TEST nvmf_target_disconnect_tc1 00:28:55.377 ************************************ 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:55.377 [2024-12-10 05:53:42.537010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.377 [2024-12-10 05:53:42.537061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15750b0 with 
addr=10.0.0.2, port=4420 00:28:55.377 [2024-12-10 05:53:42.537098] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:55.377 [2024-12-10 05:53:42.537108] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:55.377 [2024-12-10 05:53:42.537115] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:55.377 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:55.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:55.377 Initializing NVMe Controllers 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.377 00:28:55.377 real 0m0.126s 00:28:55.377 user 0m0.047s 00:28:55.377 sys 0m0.078s 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.377 ************************************ 00:28:55.377 END TEST nvmf_target_disconnect_tc1 00:28:55.377 ************************************ 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:55.377 05:53:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:55.377 ************************************ 00:28:55.377 START TEST nvmf_target_disconnect_tc2 00:28:55.377 ************************************ 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1350925 00:28:55.377 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1350925 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1350925 ']' 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.378 [2024-12-10 05:53:42.680464] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:28:55.378 [2024-12-10 05:53:42.680507] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.378 [2024-12-10 05:53:42.759946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.378 [2024-12-10 05:53:42.800624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.378 [2024-12-10 05:53:42.800659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.378 [2024-12-10 05:53:42.800668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.378 [2024-12-10 05:53:42.800674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.378 [2024-12-10 05:53:42.800679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:55.378 [2024-12-10 05:53:42.804183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:55.378 [2024-12-10 05:53:42.804212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:55.378 [2024-12-10 05:53:42.804322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:55.378 [2024-12-10 05:53:42.804323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.378 Malloc0 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.378 05:53:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.378 [2024-12-10 05:53:42.971492] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.378 05:53:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.378 05:53:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.378 [2024-12-10 05:53:43.000411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.378 05:53:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.378 05:53:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:55.378 05:53:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.378 05:53:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.378 05:53:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.378 05:53:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1350987 00:28:55.378 05:53:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:55.378 05:53:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:57.293 05:53:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1350925 00:28:57.293 05:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 
Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 [2024-12-10 05:53:45.028216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O 
failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 
00:28:57.293 [2024-12-10 05:53:45.028430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.293 Write completed with error (sct=0, sc=8) 00:28:57.293 
starting I/O failed 00:28:57.293 Read completed with error (sct=0, sc=8) 00:28:57.293 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 [2024-12-10 05:53:45.028631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, 
sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Write completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 Read completed with error (sct=0, sc=8) 00:28:57.294 starting I/O failed 00:28:57.294 [2024-12-10 05:53:45.028829] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:57.294 [2024-12-10 05:53:45.028942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.028965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.029117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.029128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.029327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.029338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.029427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.029437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.029632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.029642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 
00:28:57.294 [2024-12-10 05:53:45.029734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.029743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.029945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.029983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.030183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.030217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.030414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.030446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.030547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.030577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 
00:28:57.294 [2024-12-10 05:53:45.030746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.030778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.030899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.030931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.031064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.031108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.031306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.031317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 00:28:57.294 [2024-12-10 05:53:45.031375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.294 [2024-12-10 05:53:45.031385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.294 qpair failed and we were unable to recover it. 
00:28:57.296 [2024-12-10 05:53:45.044277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.296 [2024-12-10 05:53:45.044289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.296 qpair failed and we were unable to recover it. 00:28:57.296 [2024-12-10 05:53:45.044362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.296 [2024-12-10 05:53:45.044377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.296 qpair failed and we were unable to recover it. 00:28:57.296 [2024-12-10 05:53:45.044519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.296 [2024-12-10 05:53:45.044532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.296 qpair failed and we were unable to recover it. 00:28:57.296 [2024-12-10 05:53:45.044678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.296 [2024-12-10 05:53:45.044692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.296 qpair failed and we were unable to recover it. 00:28:57.296 [2024-12-10 05:53:45.044764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.296 [2024-12-10 05:53:45.044777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.296 qpair failed and we were unable to recover it. 
00:28:57.296 [2024-12-10 05:53:45.044909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.296 [2024-12-10 05:53:45.044922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.296 qpair failed and we were unable to recover it. 00:28:57.296 [2024-12-10 05:53:45.044987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.296 [2024-12-10 05:53:45.044999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.296 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.045063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.045075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.045151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.045164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.045251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.045264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 
00:28:57.297 [2024-12-10 05:53:45.045333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.045346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.045423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.045435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.045517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.045529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.045607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.045620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.045693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.045705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 
00:28:57.297 [2024-12-10 05:53:45.045843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.045856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.046003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.046017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.046223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.046238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.046422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.046435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.046523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.046536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 
00:28:57.297 [2024-12-10 05:53:45.046685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.046698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.046776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.046788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.046933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.046947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.047043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.047057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.047220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.047234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 
00:28:57.297 [2024-12-10 05:53:45.047325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.047338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.047487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.047500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.047574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.047586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.047808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.047821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.047910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.047923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 
00:28:57.297 [2024-12-10 05:53:45.048055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.048068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.048185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.048200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.048282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.048295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.048379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.048394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.048544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.048557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 
00:28:57.297 [2024-12-10 05:53:45.048638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.048652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.048733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.048748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.048817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.048830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.048907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.048922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.048991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.049004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 
00:28:57.297 [2024-12-10 05:53:45.049074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.049087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.049301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.049318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.049391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.049408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.049558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.049577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.049785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.049803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 
00:28:57.297 [2024-12-10 05:53:45.049912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.049956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.050059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.050091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.050202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.050236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.050411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.050444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.050576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.050608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 
00:28:57.297 [2024-12-10 05:53:45.050803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.050821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.050992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.051024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.051295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.051327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.051608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.051640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.051755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.051787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 
00:28:57.297 [2024-12-10 05:53:45.051908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.297 [2024-12-10 05:53:45.051940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.297 qpair failed and we were unable to recover it. 00:28:57.297 [2024-12-10 05:53:45.052122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.052140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.052330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.052363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.052532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.052564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.052684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.052717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 
00:28:57.298 [2024-12-10 05:53:45.052887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.052919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.053103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.053134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.053272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.053305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.053480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.053512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.053751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.053783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 
00:28:57.298 [2024-12-10 05:53:45.054041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.054083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.054262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.054281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.054375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.054393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.054478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.054496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.054654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.054672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 
00:28:57.298 [2024-12-10 05:53:45.054765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.054783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.054874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.054892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.054987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.055022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.055192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.055225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 00:28:57.298 [2024-12-10 05:53:45.055333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.298 [2024-12-10 05:53:45.055365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.298 qpair failed and we were unable to recover it. 
00:28:57.298 [2024-12-10 05:53:45.055601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.055632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.055822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.055854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.055971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.055989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.056133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.056150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.056365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.056416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.056513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.056533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.056677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.056701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.056877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.056895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.056989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.057022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.057284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.057317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.057577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.057619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.057763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.057780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.057969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.058000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.058228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.058262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.058466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.058497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.058680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.058711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.058853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.058870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.059092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.059110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.059327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.059345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.059565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.059589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.059815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.059840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.059944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.059967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.060158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.060192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.060313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.060337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.060513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.060544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.060748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.060778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.060973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.061005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.061184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.061210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.061404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.061428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.061664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.061695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.061828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.061860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.062039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.062070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.062338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.062370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.062591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.062624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.062857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.062888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.063076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.063106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.063286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.063319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.063505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.063536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.298 [2024-12-10 05:53:45.063780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.298 [2024-12-10 05:53:45.063812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.298 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.063985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.064028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.064129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.064154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.064268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.064292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.064533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.064556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.064745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.064769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.064861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.064885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.065055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.065080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.065272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.065303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.065461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.065485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.065652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.065684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.065812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.065842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.066023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.066054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.066301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.066332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.066452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.066482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.066683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.066715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.066826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.066856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.067119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.067143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.067254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.067279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.067375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.067399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.067505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.067529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.067751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.067774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.067886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.067910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.068025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.068050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.068205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.068231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.068396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.068427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.068622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.068653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.068767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.068798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.068929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.068970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.069176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.069201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.069368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.069393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.069591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.069620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.069814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.069846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.069951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.069982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.070164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.070197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.070296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.070320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.070428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.070452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.070639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.070669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.070873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.070905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.071015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.071047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.071187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.071219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.071341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.071372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.071607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.071638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.071759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.071790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.071983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.072015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.072125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.072156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.072441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.072473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.072584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.072615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.072834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.072872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.073061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.073092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.073209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.073242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.073482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.073513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.073721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.073752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.073966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.073997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.074176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.299 [2024-12-10 05:53:45.074207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.299 qpair failed and we were unable to recover it.
00:28:57.299 [2024-12-10 05:53:45.074449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.074480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.074762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.074793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.075047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.075078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.075196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.075229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.075345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.075377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.075564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.075595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.075803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.075833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.075963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.075995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.076239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.076271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.076448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.076479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.076679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.076710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.076930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.076961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.077221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.077254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.077375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.077406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.077610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.077641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.077896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.077926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.078118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.078149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.078353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.078384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.078523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.078553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.078731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.078762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.078885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.078917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.079162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.300 [2024-12-10 05:53:45.079201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.300 qpair failed and we were unable to recover it.
00:28:57.300 [2024-12-10 05:53:45.079371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.079401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.079583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.079613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.079827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.079859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.080131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.080162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.080423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.080455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 
00:28:57.300 [2024-12-10 05:53:45.080575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.080606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.080816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.080847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.081110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.081141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.081334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.081365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.081491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.081522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 
00:28:57.300 [2024-12-10 05:53:45.081633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.081664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.081818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.081855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.082026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.082057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.082229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.082262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.082386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.082418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 
00:28:57.300 [2024-12-10 05:53:45.082613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.082645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.082883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.082914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.083152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.083191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.083447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.083478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 00:28:57.300 [2024-12-10 05:53:45.083718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.300 [2024-12-10 05:53:45.083749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.300 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-12-10 05:53:45.083990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.084021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.084194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.084226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.084394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.084425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.084598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.084630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.084744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.084775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-12-10 05:53:45.084955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.084987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.085164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.085206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.085399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.085430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.085551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.085582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.085698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.085730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-12-10 05:53:45.085894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.085925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.086129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.086160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.086294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.086325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.086527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.086558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.086816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.086847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-12-10 05:53:45.087086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.087117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.087295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.087327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.087505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.087536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.087855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.087926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.088066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.088103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-12-10 05:53:45.088301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.088337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.088514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.088546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.088666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.088699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.088897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.088929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.089123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.089154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-12-10 05:53:45.089366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.089399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.089582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.089614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.089804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.089836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.090004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.090036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.090241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.090275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-12-10 05:53:45.090473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.090505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.090618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.090650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.090922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.090954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.091091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.091122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.091315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.091348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-12-10 05:53:45.091520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.091552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.091788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.091820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.091946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.091978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.092253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.092285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.092462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.092493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-12-10 05:53:45.092751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.092783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.092913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.092945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.093192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.093225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.093413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.093444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.093570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.093602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-12-10 05:53:45.093717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.093754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.093879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.093911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.094094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.094125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.094321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.094354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.094534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.094567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-12-10 05:53:45.094759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.094791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.094993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.095025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-12-10 05:53:45.095203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-12-10 05:53:45.095236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.095473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.095504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.095803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.095835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-12-10 05:53:45.096019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.096052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.096154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.096202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.096471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.096503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.096750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.096781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.096993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.097025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-12-10 05:53:45.097246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.097279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.097460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.097491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.097674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.097706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.097839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.097870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.097992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.098024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-12-10 05:53:45.098145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.098185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.098374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.098405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.098579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.098611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.098746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.098778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.098946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.098977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-12-10 05:53:45.099149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.099193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.099314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.099346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.099605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.099642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.099811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.099843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.100106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.100137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-12-10 05:53:45.100311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.100383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.100641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.100676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.100886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.100918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.101046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.101077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.101211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.101246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-12-10 05:53:45.101511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.101542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.101663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.101694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.101877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.101908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.102026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.102057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.102316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.102348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-12-10 05:53:45.102539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.102571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.102767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.102799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.103047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.103077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.103194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.103229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.103365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.103396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-12-10 05:53:45.103669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.103700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.103878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.103909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.104029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.104060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.104262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.104295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.104490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.104521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-12-10 05:53:45.104759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.104791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.104914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.104945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.105205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.105239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-12-10 05:53:45.105411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-12-10 05:53:45.105442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.105682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.105719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 
00:28:57.303 [2024-12-10 05:53:45.105904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.105935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.106062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.106093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.106207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.106239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.106504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.106535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.106661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.106693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 
00:28:57.303 [2024-12-10 05:53:45.106879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.106909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.107156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.107199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.107443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.107474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.107687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.107718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.107898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.107929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 
00:28:57.303 [2024-12-10 05:53:45.108188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.108220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.108427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.108458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.108634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.108665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.108927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.108959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.109140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.109183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 
00:28:57.303 [2024-12-10 05:53:45.109356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.109388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.109512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.109543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.109713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.109745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.109927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.109958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.110136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.110178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 
00:28:57.303 [2024-12-10 05:53:45.110349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.110380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.110570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.110601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.110786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.110817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.110936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.110967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.111160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.111201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 
00:28:57.303 [2024-12-10 05:53:45.111322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.111353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.111478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.111510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.111707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.111739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.111879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.111909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.112096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.112128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 
00:28:57.303 [2024-12-10 05:53:45.112273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.112306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-12-10 05:53:45.112495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-12-10 05:53:45.112527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.112716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.112747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.112873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.112905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.113087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.113118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-12-10 05:53:45.113330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.113362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.113531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.113562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.113768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.113799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.113932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.113963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.114147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.114195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-12-10 05:53:45.114349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.114381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.114546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.114577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.114838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.114869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.114980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.115011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.115115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.115146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-12-10 05:53:45.115401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.115432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.115555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.115586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.115825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.115856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.116082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.116113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.116335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.116369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-12-10 05:53:45.116615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.116646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.116757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.116788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.116985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.117016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.117154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.117199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.117375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.117406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-12-10 05:53:45.117573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.117604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.117785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.117816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.117931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.117962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.118136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.118177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.118304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.118337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-12-10 05:53:45.118616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.118647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.118905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.118936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.119048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.119078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.119251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.119284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.119564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.119594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-12-10 05:53:45.119772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.119803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.119937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.119970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.120257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.120289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.120502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.120533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.120768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.120799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-12-10 05:53:45.120971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.121002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.121213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.121246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.121509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.121540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.121642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.121672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-12-10 05:53:45.121931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.121962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-12-10 05:53:45.122151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-12-10 05:53:45.122189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.122314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.122345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.122480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.122512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.122642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.122673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.122935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.122972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-12-10 05:53:45.123175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.123207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.123464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.123496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.123683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.123713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.123845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.123877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.124085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.124116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-12-10 05:53:45.124327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.124360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.124547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.124578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.124766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.124797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.125051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.125082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.125286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.125319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-12-10 05:53:45.125570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.125601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.125810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.125842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.126015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.126046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.126292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.126325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.126587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.126618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-12-10 05:53:45.126743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.126775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.126960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.126991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.127246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.127279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.127395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.127426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.127664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.127695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-12-10 05:53:45.127955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.127987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.128200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.128233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.128354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.128385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.128560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.128592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.128711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.128742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-12-10 05:53:45.128955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.128986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.129186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.129220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.129400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.129431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.129721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.129753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.129929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.129961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-12-10 05:53:45.130094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.130125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.130256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.130288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.130535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.130566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.130764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.130795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.130968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.130999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-12-10 05:53:45.131193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.131226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.131398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.131429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.131547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.131578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.131774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.131805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.132049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.132087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-12-10 05:53:45.132221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.132253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.132439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.132470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-12-10 05:53:45.132598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-12-10 05:53:45.132629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.132810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.132842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.133054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.133085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 
00:28:57.306 [2024-12-10 05:53:45.133209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.133241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.133341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.133373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.133547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.133578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.133770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.133802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.133988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.134020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 
00:28:57.306 [2024-12-10 05:53:45.134192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.134224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.134373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.134404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.134604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.134635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.134823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.134855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.134966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.134997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 
00:28:57.306 [2024-12-10 05:53:45.135185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.135221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.135340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.135371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.135545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.135575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.135846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.135880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.136004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.136035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 
00:28:57.306 [2024-12-10 05:53:45.137579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.137637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.137948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.137983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.138124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.138156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.138364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.138399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.138521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.138553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 
00:28:57.306 [2024-12-10 05:53:45.138737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.138770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.138911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.138946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.139190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.139225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.139419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.139450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.139572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.139604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 
00:28:57.306 [2024-12-10 05:53:45.139752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.139785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.139972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.140003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.140206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.140239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.140443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.140476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-12-10 05:53:45.140602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-12-10 05:53:45.140635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 
00:28:57.306 [2024-12-10 05:53:45.140739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.140771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.140887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.140918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.141107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.141140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.141273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.141304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.141488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.141526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.141703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.141735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.141841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.141873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.142003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.142034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.142222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.142255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.142379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.142411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.142593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.142625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.142755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.142786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.142909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.142942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.143065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.143096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.143213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.143247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.143367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.143399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.143533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.143566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.143671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.306 [2024-12-10 05:53:45.143703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.306 qpair failed and we were unable to recover it.
00:28:57.306 [2024-12-10 05:53:45.143814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.143846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.143959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.143990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.144122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.144155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.144303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.144336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.144449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.144480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.144656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.144687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.144797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.144829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.144955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.144986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.145087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.145118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.145299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.145333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.145530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.145561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.145682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.145712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.145900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.145932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.146058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.146090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.146225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.146258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.146439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.146471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.146588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.146620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.146799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.146829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.146937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.146968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.147215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.147251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.147360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.147392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.147642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.147675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.147850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.147882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.148010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.148041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.148218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.148251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.148366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.148398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.148523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.307 [2024-12-10 05:53:45.148561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.307 qpair failed and we were unable to recover it.
00:28:57.307 [2024-12-10 05:53:45.148672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.148704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.148829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.148861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.149036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.149067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.149256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.149289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.149393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.149424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.149680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.149712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.149911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.149942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.150136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.150184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.150356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.150388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.150615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.150647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.150762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.150794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.151581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.151627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.151909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.151943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.152074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.152106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.152290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.152326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.152456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.152488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.152666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.152698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.152832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.152865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.152981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.153014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.153219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.153252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.153450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.153482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.153590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.153621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.153806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.153837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.154036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.154068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.154210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.154242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.154396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.154429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.154607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.154678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.154821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.154856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.155096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.155128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.155354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.155388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.155589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.155622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.155750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.155782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.155912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.155944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.156055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.156087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.156190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.156223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.156360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.156393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.156508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.156540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.156716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.156748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.156870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.156901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.157098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.157131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.308 [2024-12-10 05:53:45.157256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.308 [2024-12-10 05:53:45.157289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.308 qpair failed and we were unable to recover it.
00:28:57.309 [2024-12-10 05:53:45.157471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.309 [2024-12-10 05:53:45.157503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.309 qpair failed and we were unable to recover it.
00:28:57.309 [2024-12-10 05:53:45.157626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.309 [2024-12-10 05:53:45.157657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.309 qpair failed and we were unable to recover it.
00:28:57.309 [2024-12-10 05:53:45.157776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.309 [2024-12-10 05:53:45.157808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.309 qpair failed and we were unable to recover it.
00:28:57.309 [2024-12-10 05:53:45.157916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.309 [2024-12-10 05:53:45.157948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.309 qpair failed and we were unable to recover it.
00:28:57.309 [2024-12-10 05:53:45.158061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.309 [2024-12-10 05:53:45.158093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.309 qpair failed and we were unable to recover it.
00:28:57.309 [2024-12-10 05:53:45.158272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.309 [2024-12-10 05:53:45.158306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.309 qpair failed and we were unable to recover it.
00:28:57.309 [2024-12-10 05:53:45.158490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.309 [2024-12-10 05:53:45.158524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.309 qpair failed and we were unable to recover it.
00:28:57.309 [2024-12-10 05:53:45.158633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.309 [2024-12-10 05:53:45.158664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.309 qpair failed and we were unable to recover it.
00:28:57.309 [2024-12-10 05:53:45.158861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.309 [2024-12-10 05:53:45.158893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.309 qpair failed and we were unable to recover it.
00:28:57.309 [2024-12-10 05:53:45.159027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.159060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.159243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.159277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.159378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.159412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.159540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.159579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.159755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.159789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 
00:28:57.309 [2024-12-10 05:53:45.159909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.159942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.160128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.160161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.160292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.160324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.160492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.160524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.160648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.160680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 
00:28:57.309 [2024-12-10 05:53:45.160856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.160889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.161009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.161041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.161214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.161247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.161348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.161380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.161500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.161532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 
00:28:57.309 [2024-12-10 05:53:45.161729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.161762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.161880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.161913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.162124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.162156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.162354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.162387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.162498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.162530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 
00:28:57.309 [2024-12-10 05:53:45.162717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.162749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.163007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.163039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.163209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.163243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.163410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.163443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.163545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.163576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 
00:28:57.309 [2024-12-10 05:53:45.163695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.163727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.163975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.164008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.164115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.164146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.164362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.164395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-12-10 05:53:45.164504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-12-10 05:53:45.164536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 
00:28:57.310 [2024-12-10 05:53:45.164758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.164795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.164914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.164946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.165076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.165108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.165225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.165257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.165444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.165476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 
00:28:57.310 [2024-12-10 05:53:45.165604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.165636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.165755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.165787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.165953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.165985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.166093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.166125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.166237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.166269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 
00:28:57.310 [2024-12-10 05:53:45.166450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.166482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.166653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.166685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.166812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.166843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.167091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.167122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.167264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.167302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 
00:28:57.310 [2024-12-10 05:53:45.167406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.167436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.167558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.167589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.167710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.167743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.167851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.167882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.168003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.168035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 
00:28:57.310 [2024-12-10 05:53:45.168221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.168255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.168501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.168532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.168718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.168751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.168992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.169024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.169145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.169191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 
00:28:57.310 [2024-12-10 05:53:45.169314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.169347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.169452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.169486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.169677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.169709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.169825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.169859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 00:28:57.310 [2024-12-10 05:53:45.169974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.310 [2024-12-10 05:53:45.170004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.310 qpair failed and we were unable to recover it. 
00:28:57.311 [2024-12-10 05:53:45.170187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.170219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.170395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.170427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.170531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.170561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.170743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.170773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.170897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.170928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 
00:28:57.311 [2024-12-10 05:53:45.171038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.171070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.171193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.171224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.171416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.171449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.171628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.171661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.171840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.171871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 
00:28:57.311 [2024-12-10 05:53:45.171982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.172013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.172138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.172186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.172429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.172461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.172569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.172600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.172707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.172739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 
00:28:57.311 [2024-12-10 05:53:45.172919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.172952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.173069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.173100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.173230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.173263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.173437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.173469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.173576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.173607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 
00:28:57.311 [2024-12-10 05:53:45.173780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.173820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.173994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.174026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.174197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.174230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.174345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.174374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.174489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.174521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 
00:28:57.311 [2024-12-10 05:53:45.174630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.174661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.174859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.174891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.175007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.175040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.175162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.175204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 00:28:57.311 [2024-12-10 05:53:45.175323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.311 [2024-12-10 05:53:45.175356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.311 qpair failed and we were unable to recover it. 
00:28:57.604 [2024-12-10 05:53:45.176429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.604 [2024-12-10 05:53:45.176501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.604 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7f4760000b90 (addr=10.0.0.2, port=4420) through 05:53:45.178398 ...]
00:28:57.604 [2024-12-10 05:53:45.178585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.178617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.178733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.178766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.178871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.178902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.179087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.179118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.179390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.179423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 
00:28:57.604 [2024-12-10 05:53:45.179551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.179583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.179690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.179721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.179918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.179950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.180060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.180094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.180207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.180240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 
00:28:57.604 [2024-12-10 05:53:45.180414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.180446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.180636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.180669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.180782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.180814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.180964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.180996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.604 [2024-12-10 05:53:45.181113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.181144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 
00:28:57.604 [2024-12-10 05:53:45.181274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.604 [2024-12-10 05:53:45.181306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.604 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.181411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.181443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.181547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.181578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.181750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.181782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.181952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.181983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.182089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.182122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.182257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.182290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.182394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.182425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.182540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.182572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.182762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.182793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.182915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.182947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.183147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.183191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.183364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.183396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.183572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.183604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.183780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.183811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.183983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.184014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.184253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.184287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.184472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.184503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.184738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.184776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.184896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.184928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.185049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.185081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.185199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.185232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.185409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.185441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.185557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.185588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.185777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.185809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.185917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.185949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.186192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.186225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.186363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.186394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.186579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.186612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.186725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.186756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.186928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.186960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.187139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.187179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.187303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.187335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.187445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.187476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.187588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.187619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.187891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.187923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.188027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.188059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.188246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.188279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.188455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.188486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.188601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.188632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.188819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.188850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.188973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.189005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.189127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.189158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.189349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.189382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.189499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.189531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.189690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.189774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.189925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.189962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.190196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.190231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.190345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.190377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.190569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.190601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.190734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.190766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.190936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.190967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.191070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.191102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.191219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.191253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.191425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.191458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.191666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.191699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.191871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.191902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.192084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.192116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.192298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.192340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 00:28:57.605 [2024-12-10 05:53:45.192453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.605 [2024-12-10 05:53:45.192485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.605 qpair failed and we were unable to recover it. 
00:28:57.605 [2024-12-10 05:53:45.192614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.605 [2024-12-10 05:53:45.192646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.605 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure triplet repeats for tqpair=0x7f4758000b90, timestamps 05:53:45.192768 through 05:53:45.196943 ...]
00:28:57.606 [2024-12-10 05:53:45.197151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.606 [2024-12-10 05:53:45.197215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.606 qpair failed and we were unable to recover it.
[... identical triplet repeats for tqpair=0x7f4760000b90, timestamps 05:53:45.197329 through 05:53:45.212603 ...]
00:28:57.607 [2024-12-10 05:53:45.212726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.607 [2024-12-10 05:53:45.212758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.607 qpair failed and we were unable to recover it. 00:28:57.607 [2024-12-10 05:53:45.212887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.607 [2024-12-10 05:53:45.212918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.607 qpair failed and we were unable to recover it. 00:28:57.607 [2024-12-10 05:53:45.213138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.607 [2024-12-10 05:53:45.213176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.607 qpair failed and we were unable to recover it. 00:28:57.607 [2024-12-10 05:53:45.213301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.607 [2024-12-10 05:53:45.213333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.607 qpair failed and we were unable to recover it. 00:28:57.607 [2024-12-10 05:53:45.213510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.607 [2024-12-10 05:53:45.213542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.607 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.213710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.213742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.213845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.213877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.213979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.214010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.214132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.214164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.214371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.214403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.214599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.214631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.214816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.214848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.214980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.215011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.215120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.215152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.215265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.215297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.215416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.215448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.215627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.215659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.215856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.215888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.216010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.216041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.216159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.216201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.216388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.216420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.216589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.216620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.216746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.216777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.216954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.216986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.217121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.217152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.217297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.217329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.217439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.217471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.217594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.217626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.217804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.217835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.218095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.218126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.218306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.218345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.218472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.218503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.218606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.218637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.218843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.218875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.219005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.219036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.219145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.219189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.219380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.219411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.219582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.219613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.219724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.219756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.219887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.219919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.220121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.220152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.220353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.220385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.220514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.220546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.220727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.220759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.220866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.220898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.221006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.221038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.221209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.221243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.221347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.221379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.221503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.221534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.221640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.221672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.221776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.221807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.221933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.221965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.222085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.222117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.222239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.222271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.222390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.222422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.222541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.222573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.222711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.222744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.222857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.222888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.223015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.223047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.223220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.223254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.223365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.223395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.223514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.223545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.223652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.223683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.223792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.223823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.224010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.224042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 
00:28:57.608 [2024-12-10 05:53:45.224172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.224204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.224373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.224404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.608 [2024-12-10 05:53:45.224622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-12-10 05:53:45.224653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.608 qpair failed and we were unable to recover it. 00:28:57.609 [2024-12-10 05:53:45.224760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-12-10 05:53:45.224791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.609 qpair failed and we were unable to recover it. 00:28:57.609 [2024-12-10 05:53:45.224972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-12-10 05:53:45.225002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.609 qpair failed and we were unable to recover it. 
00:28:57.609 [2024-12-10 05:53:45.225122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-12-10 05:53:45.225159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.609 qpair failed and we were unable to recover it. 00:28:57.609 [2024-12-10 05:53:45.225317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-12-10 05:53:45.225350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.609 qpair failed and we were unable to recover it. 00:28:57.609 [2024-12-10 05:53:45.225524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-12-10 05:53:45.225555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.609 qpair failed and we were unable to recover it. 00:28:57.609 [2024-12-10 05:53:45.225671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-12-10 05:53:45.225702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.609 qpair failed and we were unable to recover it. 00:28:57.609 [2024-12-10 05:53:45.225895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-12-10 05:53:45.225927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.609 qpair failed and we were unable to recover it. 
00:28:57.609 [2024-12-10 05:53:45.226032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.609 [2024-12-10 05:53:45.226062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.609 qpair failed and we were unable to recover it.
00:28:57.611 [2024-12-10 05:53:45.244789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.244823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.244941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.244970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.245132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.245161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.245410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.245439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.245618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.245647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 
00:28:57.611 [2024-12-10 05:53:45.245775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.245804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.245921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.245950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.246046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.246074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.246187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.246218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.246384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.246413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 
00:28:57.611 [2024-12-10 05:53:45.246533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.246562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.246768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.246797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.246905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.246933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.247107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.247136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.247259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.247289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 
00:28:57.611 [2024-12-10 05:53:45.247460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.247489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.247721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.247749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.247935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.247964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.248143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.248177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.248344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.248373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 
00:28:57.611 [2024-12-10 05:53:45.248483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.248512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.248625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.248653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.248763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.248792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.248885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.248914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.249078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.249106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 
00:28:57.611 [2024-12-10 05:53:45.249211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.249241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.249344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.249373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.249491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.249520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.249774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.249803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.249920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.249948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 
00:28:57.611 [2024-12-10 05:53:45.250047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.250076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.250192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.250222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.250401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.250430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.250616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.250645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.250753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.250782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 
00:28:57.611 [2024-12-10 05:53:45.250962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.250990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.251205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.251235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.251414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.251444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.251554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.251583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.251686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.251715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 
00:28:57.611 [2024-12-10 05:53:45.251897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.251931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.252094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.252122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.252381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.252410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.252577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.252606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 00:28:57.611 [2024-12-10 05:53:45.252877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.252906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.611 qpair failed and we were unable to recover it. 
00:28:57.611 [2024-12-10 05:53:45.253158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.611 [2024-12-10 05:53:45.253193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.253427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.253455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.253699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.253727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.253894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.253922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.254093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.254121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 
00:28:57.612 [2024-12-10 05:53:45.254370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.254400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.254630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.254658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.254773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.254802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.254907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.254936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.255118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.255147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 
00:28:57.612 [2024-12-10 05:53:45.255264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.255294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.255458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.255486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.255651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.255680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.255938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.255968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.256185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.256215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 
00:28:57.612 [2024-12-10 05:53:45.256341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.256370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.256496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.256525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.256787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.256816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.256912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.256941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.257039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.257067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 
00:28:57.612 [2024-12-10 05:53:45.257205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.257235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.257423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.257452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.257582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.257612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.257873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.257901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.258189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.258218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 
00:28:57.612 [2024-12-10 05:53:45.258470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.258499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.258689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.258717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.258883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.258911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.259077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.259105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.259205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.259235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 
00:28:57.612 [2024-12-10 05:53:45.259466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.259495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.259670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.259698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.259927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.259956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.260077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.260105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 00:28:57.612 [2024-12-10 05:53:45.260215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.612 [2024-12-10 05:53:45.260246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.612 qpair failed and we were unable to recover it. 
00:28:57.613 [2024-12-10 05:53:45.274549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.613 [2024-12-10 05:53:45.274621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:57.613 qpair failed and we were unable to recover it.
00:28:57.614 [2024-12-10 05:53:45.283590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.283621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.283872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.283903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.284083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.284115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.284301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.284334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.284509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.284541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 
00:28:57.614 [2024-12-10 05:53:45.284729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.284760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.284887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.284919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.285025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.285057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.285189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.285221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.285335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.285366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 
00:28:57.614 [2024-12-10 05:53:45.285488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.285531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.285819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.285850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.285968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.286000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.286122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.286156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.286334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.286366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 
00:28:57.614 [2024-12-10 05:53:45.286500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.286532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.286634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.286667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.286783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.286816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.286949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.286982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.287110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.287143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 
00:28:57.614 [2024-12-10 05:53:45.287334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.287366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.287488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.287519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.287806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.287837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.287945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.287977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.288112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.288144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 
00:28:57.614 [2024-12-10 05:53:45.288329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.288361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.288592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.288623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.288754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.288786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.288904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.288934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.614 [2024-12-10 05:53:45.289042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.289075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 
00:28:57.614 [2024-12-10 05:53:45.289206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.614 [2024-12-10 05:53:45.289239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.614 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.289423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.289455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.289575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.289607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.289723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.289757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.289927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.289959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 
00:28:57.615 [2024-12-10 05:53:45.290139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.290180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.290284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.290316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.290451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.290492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.290614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.290649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.290753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.290786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 
00:28:57.615 [2024-12-10 05:53:45.290914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.290946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.291083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.291116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.291311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.291344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.291464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.291496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.291757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.291791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 
00:28:57.615 [2024-12-10 05:53:45.291914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.291946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.292067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.292101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.292293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.292327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.292472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.292505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.292641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.292674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 
00:28:57.615 [2024-12-10 05:53:45.292781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.292813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.292928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.292961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.293083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.293115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.293248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.293281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.293390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.293423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 
00:28:57.615 [2024-12-10 05:53:45.293611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.293644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.293882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.293914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.294035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.294069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.294280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.294314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.294429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.294462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 
00:28:57.615 [2024-12-10 05:53:45.294583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.294616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.294734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.294767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.294878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.294911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.295036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.295069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.295206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.295241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 
00:28:57.615 [2024-12-10 05:53:45.295355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.295388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.295506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.295540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.295723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.295755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.295937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.295970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.296094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.296128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 
00:28:57.615 [2024-12-10 05:53:45.296240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.296276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.296389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.296422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.296526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.296558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.296758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.296790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 00:28:57.615 [2024-12-10 05:53:45.296909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.296940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it. 
00:28:57.615 [2024-12-10 05:53:45.297116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.615 [2024-12-10 05:53:45.297148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.615 qpair failed and we were unable to recover it.
[the connect()/qpair error pair above repeats ~115 times between 05:53:45.297 and 05:53:45.318, identical except for timestamps: errno 111 (ECONNREFUSED) on every attempt to 10.0.0.2 port 4420; tqpair=0x7f4754000b90 never recovered]
00:28:57.622 [2024-12-10 05:53:45.318398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.622 [2024-12-10 05:53:45.318432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.622 qpair failed and we were unable to recover it. 00:28:57.622 [2024-12-10 05:53:45.318612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.622 [2024-12-10 05:53:45.318652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.622 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.318774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.318807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.319049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.319081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.319320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.319356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 
00:28:57.623 [2024-12-10 05:53:45.319460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.319490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.319594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.319624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.319886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.319923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.320096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.320129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.320270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.320303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 
00:28:57.623 [2024-12-10 05:53:45.320500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.320532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.320648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.320679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.320864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.320896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.321010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.321042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.321188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.321221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 
00:28:57.623 [2024-12-10 05:53:45.321383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.321415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.321552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.321591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.321724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.321760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.321935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.321968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.322197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.322231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 
00:28:57.623 [2024-12-10 05:53:45.322435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.322468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.322606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.322639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.322879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.322933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.323040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.323072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.323341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.323377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 
00:28:57.623 [2024-12-10 05:53:45.323619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.323651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.323863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.323895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.324112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.324144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.324359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.324393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.324579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.324612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 
00:28:57.623 [2024-12-10 05:53:45.324716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.324749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.325006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.325040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.325223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.325257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.325384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.325416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.325621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.325655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 
00:28:57.623 [2024-12-10 05:53:45.325854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.325888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.326061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.326093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.326308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.326342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.326447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.326479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.326649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.326682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 
00:28:57.623 [2024-12-10 05:53:45.326794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.326833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.326939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.326972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.327142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.327187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.327325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.327357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.623 [2024-12-10 05:53:45.327533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.327566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 
00:28:57.623 [2024-12-10 05:53:45.327750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.623 [2024-12-10 05:53:45.327782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.623 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.327958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.327991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.328291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.328335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.328470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.328503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.328640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.328676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 
00:28:57.624 [2024-12-10 05:53:45.328918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.328950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.329133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.329165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.329299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.329332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.329508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.329581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.329732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.329769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 
00:28:57.624 [2024-12-10 05:53:45.329958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.329992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.330193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.330227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.330422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.330453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.330646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.330676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.330844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.330875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 
00:28:57.624 [2024-12-10 05:53:45.330985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.331017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.331155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.331200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.331382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.331414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.331539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.331571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.331693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.331723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 
00:28:57.624 [2024-12-10 05:53:45.331838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.331869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.332067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.332099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.332283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.332317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.332499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.332531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.332711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.332741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 
00:28:57.624 [2024-12-10 05:53:45.332855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.332887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.624 qpair failed and we were unable to recover it. 00:28:57.624 [2024-12-10 05:53:45.333054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.624 [2024-12-10 05:53:45.333088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.625 qpair failed and we were unable to recover it. 00:28:57.625 [2024-12-10 05:53:45.333210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.625 [2024-12-10 05:53:45.333242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.625 qpair failed and we were unable to recover it. 00:28:57.625 [2024-12-10 05:53:45.333413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.625 [2024-12-10 05:53:45.333444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.625 qpair failed and we were unable to recover it. 00:28:57.625 [2024-12-10 05:53:45.333689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.625 [2024-12-10 05:53:45.333728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.625 qpair failed and we were unable to recover it. 
00:28:57.625 [2024-12-10 05:53:45.333851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.625 [2024-12-10 05:53:45.333883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.625 qpair failed and we were unable to recover it. 
00:28:57.625 [... the three-message error sequence above repeats for every subsequent reconnect attempt from 05:53:45.334057 through 05:53:45.356712; each attempt fails identically with errno = 111 (connection refused) against tqpair=0x8cf1a0, addr=10.0.0.2, port=4420, and the qpair is not recovered ...]
00:28:57.627 [2024-12-10 05:53:45.356816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.627 [2024-12-10 05:53:45.356846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.627 qpair failed and we were unable to recover it. 00:28:57.627 [2024-12-10 05:53:45.356973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.357003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.357202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.357236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.357354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.357387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.357493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.357525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 
00:28:57.628 [2024-12-10 05:53:45.357638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.357669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.357786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.357817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.357957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.357988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.358131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.358164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.358399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.358433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 
00:28:57.628 [2024-12-10 05:53:45.358550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.358582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.358772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.358804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.358920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.358951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.359070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.359100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.359220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.359252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 
00:28:57.628 [2024-12-10 05:53:45.359425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.359456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.359570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.359599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.359836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.359865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.359989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.360020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.360272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.360303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 
00:28:57.628 [2024-12-10 05:53:45.360421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.360462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.360593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.360624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.360815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.360845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.360952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.360982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.361284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.361316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 
00:28:57.628 [2024-12-10 05:53:45.361454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.361485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.361613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.361643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.361758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.361788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.361901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.361931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.362065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.362095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 
00:28:57.628 [2024-12-10 05:53:45.362281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.362312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.362493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.362523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.362629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.362659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.362923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.362953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.363087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.363118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 
00:28:57.628 [2024-12-10 05:53:45.363320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.363353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.363532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.363563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.363822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.363853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.364029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.364061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.364239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.364274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 
00:28:57.628 [2024-12-10 05:53:45.364460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.364492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.364683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.364716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.364830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.364861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.628 [2024-12-10 05:53:45.365097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.628 [2024-12-10 05:53:45.365128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.628 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.365308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.365341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 
00:28:57.629 [2024-12-10 05:53:45.365453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.365485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.365606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.365637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.365817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.365848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.365958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.365990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.366202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.366251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 
00:28:57.629 [2024-12-10 05:53:45.366434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.366466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.366586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.366617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.366787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.366819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.367060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.367091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.367206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.367239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 
00:28:57.629 [2024-12-10 05:53:45.367457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.367489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.367678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.367710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.367830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.367864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.368030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.368062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.368249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.368282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 
00:28:57.629 [2024-12-10 05:53:45.368397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.368429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.368617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.368655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.368842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.368873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.369053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.369084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.369202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.369234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 
00:28:57.629 [2024-12-10 05:53:45.369349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.369382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.369516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.369548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.369719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.369751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.369944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.369976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.370190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.370225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 
00:28:57.629 [2024-12-10 05:53:45.370348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.370380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.370507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.370539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.370676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.370709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.371003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.629 [2024-12-10 05:53:45.371034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.629 qpair failed and we were unable to recover it. 00:28:57.629 [2024-12-10 05:53:45.371143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.630 [2024-12-10 05:53:45.371183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.630 qpair failed and we were unable to recover it. 
00:28:57.630 [2024-12-10 05:53:45.371312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.630 [2024-12-10 05:53:45.371344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.630 qpair failed and we were unable to recover it.
00:28:57.632 [2024-12-10 05:53:45.389643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.632 [2024-12-10 05:53:45.389715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.632 qpair failed and we were unable to recover it.
00:28:57.632 [2024-12-10 05:53:45.393454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.632 [2024-12-10 05:53:45.393487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.632 qpair failed and we were unable to recover it. 00:28:57.632 [2024-12-10 05:53:45.393680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.632 [2024-12-10 05:53:45.393713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.632 qpair failed and we were unable to recover it. 00:28:57.632 [2024-12-10 05:53:45.393958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.632 [2024-12-10 05:53:45.393990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.632 qpair failed and we were unable to recover it. 00:28:57.632 [2024-12-10 05:53:45.394114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.632 [2024-12-10 05:53:45.394147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.632 qpair failed and we were unable to recover it. 00:28:57.632 [2024-12-10 05:53:45.394282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.632 [2024-12-10 05:53:45.394315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.632 qpair failed and we were unable to recover it. 
00:28:57.632 [2024-12-10 05:53:45.394425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.632 [2024-12-10 05:53:45.394457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.632 qpair failed and we were unable to recover it. 00:28:57.632 [2024-12-10 05:53:45.394559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.632 [2024-12-10 05:53:45.394592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.632 qpair failed and we were unable to recover it. 00:28:57.632 [2024-12-10 05:53:45.394863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.632 [2024-12-10 05:53:45.394897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.632 qpair failed and we were unable to recover it. 00:28:57.632 [2024-12-10 05:53:45.395028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.632 [2024-12-10 05:53:45.395059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.632 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.395184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.395217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 
00:28:57.633 [2024-12-10 05:53:45.395411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.395445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.395548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.395580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.395701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.395734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.395841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.395874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.395989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.396023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 
00:28:57.633 [2024-12-10 05:53:45.396146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.396188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.396304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.396338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.396515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.396547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.396656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.396690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.396809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.396841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 
00:28:57.633 [2024-12-10 05:53:45.396948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.396986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.397089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.397120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.397252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.397285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.397458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.397490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.397597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.397629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 
00:28:57.633 [2024-12-10 05:53:45.397894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.397927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.398061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.398093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.398201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.398234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.398351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.398382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.398568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.398600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 
00:28:57.633 [2024-12-10 05:53:45.398751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.398783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.398967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.398998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.399116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.399148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.399271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.399305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.399435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.399467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 
00:28:57.633 [2024-12-10 05:53:45.399648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.399684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.399793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.399824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.400083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.400115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.400242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.400273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.633 [2024-12-10 05:53:45.400462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.400491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 
00:28:57.633 [2024-12-10 05:53:45.400607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.633 [2024-12-10 05:53:45.400636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.633 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.400766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.400796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.400915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.400946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.401053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.401083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.401202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.401233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 
00:28:57.634 [2024-12-10 05:53:45.401350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.401380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.401638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.401668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.401845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.401875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.401981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.402012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.402118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.402148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 
00:28:57.634 [2024-12-10 05:53:45.402346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.402377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.402560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.402590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.402719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.402748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.402865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.402895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.403015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.403047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 
00:28:57.634 [2024-12-10 05:53:45.403262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.403294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.403402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.403434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.403563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.403594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.403715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.403747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.403880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.403912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 
00:28:57.634 [2024-12-10 05:53:45.404114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.404151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.404283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.404314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.404423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.404453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.404580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.404609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.404782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.404812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 
00:28:57.634 [2024-12-10 05:53:45.404928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.404958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.405137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.405173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.405278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.405308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.405417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.405448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.405624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.405654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 
00:28:57.634 [2024-12-10 05:53:45.405857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.405887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.405990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.406020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.406125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.406155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.406357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.406389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 00:28:57.634 [2024-12-10 05:53:45.406658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.634 [2024-12-10 05:53:45.406690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.634 qpair failed and we were unable to recover it. 
00:28:57.634 [2024-12-10 05:53:45.406796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.634 [2024-12-10 05:53:45.406827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.634 qpair failed and we were unable to recover it.
[identical three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 05:53:45.406931 through 05:53:45.427776]
00:28:57.637 [2024-12-10 05:53:45.427954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.637 [2024-12-10 05:53:45.427986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.637 qpair failed and we were unable to recover it. 00:28:57.637 [2024-12-10 05:53:45.428174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.637 [2024-12-10 05:53:45.428207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.637 qpair failed and we were unable to recover it. 00:28:57.637 [2024-12-10 05:53:45.428396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.637 [2024-12-10 05:53:45.428428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.637 qpair failed and we were unable to recover it. 00:28:57.637 [2024-12-10 05:53:45.428548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.637 [2024-12-10 05:53:45.428580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.637 qpair failed and we were unable to recover it. 00:28:57.637 [2024-12-10 05:53:45.428719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.637 [2024-12-10 05:53:45.428752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.637 qpair failed and we were unable to recover it. 
00:28:57.637 [2024-12-10 05:53:45.428930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.637 [2024-12-10 05:53:45.428961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.637 qpair failed and we were unable to recover it. 00:28:57.637 [2024-12-10 05:53:45.429202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.637 [2024-12-10 05:53:45.429234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.637 qpair failed and we were unable to recover it. 00:28:57.637 [2024-12-10 05:53:45.429404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.637 [2024-12-10 05:53:45.429436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.637 qpair failed and we were unable to recover it. 00:28:57.637 [2024-12-10 05:53:45.429551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.637 [2024-12-10 05:53:45.429583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.637 qpair failed and we were unable to recover it. 00:28:57.637 [2024-12-10 05:53:45.429759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.637 [2024-12-10 05:53:45.429790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.637 qpair failed and we were unable to recover it. 
00:28:57.638 [2024-12-10 05:53:45.429900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.429932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.430033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.430064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.430303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.430336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.430459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.430490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.430595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.430626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 
00:28:57.638 [2024-12-10 05:53:45.430804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.430835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.431009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.431041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.431239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.431272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.431388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.431421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.431600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.431631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 
00:28:57.638 [2024-12-10 05:53:45.431871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.431903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.432079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.432111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.432305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.432337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.432521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.432552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.432672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.432704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 
00:28:57.638 [2024-12-10 05:53:45.432826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.432857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.433046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.433077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.433201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.433235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.433361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.433393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.433523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.433554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 
00:28:57.638 [2024-12-10 05:53:45.433756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.433794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.433900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.433932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.434041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.434072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.434328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.434361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.434538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.434569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 
00:28:57.638 [2024-12-10 05:53:45.434680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.434712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.434834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.434865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.435039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.435071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.435187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.435223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.435486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.435518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 
00:28:57.638 [2024-12-10 05:53:45.435632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.435664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.435780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.435811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.436049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.436080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.436209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.638 [2024-12-10 05:53:45.436241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.638 qpair failed and we were unable to recover it. 00:28:57.638 [2024-12-10 05:53:45.436450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.436482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 
00:28:57.639 [2024-12-10 05:53:45.436702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.436735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.436877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.436909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.437078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.437109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.437243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.437275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.437396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.437427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 
00:28:57.639 [2024-12-10 05:53:45.437612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.437643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.437882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.437913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.438117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.438148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.438381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.438414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.438596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.438627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 
00:28:57.639 [2024-12-10 05:53:45.438728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.438759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.438894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.438925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.439052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.439084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.439254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.439287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.439390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.439421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 
00:28:57.639 [2024-12-10 05:53:45.439635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.439666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.439849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.439880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.440076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.440107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.440294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.440328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.440510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.440541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 
00:28:57.639 [2024-12-10 05:53:45.440679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.440710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.440880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.440911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.441030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.441061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.441163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.441202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.441324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.441355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 
00:28:57.639 [2024-12-10 05:53:45.441479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.441517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.441704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.441735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.442018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.442049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.442267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.442300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.442403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.442434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 
00:28:57.639 [2024-12-10 05:53:45.442563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.442595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.442781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.442813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.442934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.442965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.443155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.443198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 00:28:57.639 [2024-12-10 05:53:45.443314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.639 [2024-12-10 05:53:45.443346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:57.639 qpair failed and we were unable to recover it. 
00:28:57.640 [2024-12-10 05:53:45.446781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.640 [2024-12-10 05:53:45.446852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.640 qpair failed and we were unable to recover it. 
00:28:57.640 [2024-12-10 05:53:45.452306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.640 [2024-12-10 05:53:45.452378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.640 qpair failed and we were unable to recover it. 00:28:57.640 [2024-12-10 05:53:45.452577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dd0f0 is same with the state(6) to be set 
00:28:57.641 [2024-12-10 05:53:45.457273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.457308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.457488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.457520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.457764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.457794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.457966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.457998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.458099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.458131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 
00:28:57.641 [2024-12-10 05:53:45.458356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.458390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.458586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.458617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.458815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.458846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.458975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.459006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.459127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.459157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 
00:28:57.641 [2024-12-10 05:53:45.459272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.459304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.459427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.459458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.459585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.459616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.459736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.459774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.459895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.459927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 
00:28:57.641 [2024-12-10 05:53:45.460100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.460132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.460257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.460290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.460408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.460439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.460606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.460637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.460854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.460885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 
00:28:57.641 [2024-12-10 05:53:45.461011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.461042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.461236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.461269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.461386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.461417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.461584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.461616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.461730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.461762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 
00:28:57.641 [2024-12-10 05:53:45.461936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.461966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.462072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.462103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.462227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.462260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.641 qpair failed and we were unable to recover it. 00:28:57.641 [2024-12-10 05:53:45.462460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.641 [2024-12-10 05:53:45.462490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.642 qpair failed and we were unable to recover it. 00:28:57.642 [2024-12-10 05:53:45.462609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.642 [2024-12-10 05:53:45.462640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.642 qpair failed and we were unable to recover it. 
00:28:57.642 [2024-12-10 05:53:45.462814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.642 [2024-12-10 05:53:45.462845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.642 qpair failed and we were unable to recover it.
00:28:57.642 [2024-12-10 05:53:45.463054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.642 [2024-12-10 05:53:45.463085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.642 qpair failed and we were unable to recover it.
00:28:57.924 [2024-12-10 05:53:45.463260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.924 [2024-12-10 05:53:45.463293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.924 qpair failed and we were unable to recover it.
00:28:57.924 [2024-12-10 05:53:45.463398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.924 [2024-12-10 05:53:45.463429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.924 qpair failed and we were unable to recover it.
00:28:57.924 [2024-12-10 05:53:45.463604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.924 [2024-12-10 05:53:45.463635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.924 qpair failed and we were unable to recover it.
00:28:57.924 [2024-12-10 05:53:45.463752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.924 [2024-12-10 05:53:45.463783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.924 qpair failed and we were unable to recover it.
00:28:57.924 [2024-12-10 05:53:45.463899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.924 [2024-12-10 05:53:45.463931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.924 qpair failed and we were unable to recover it.
00:28:57.924 [2024-12-10 05:53:45.464118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.924 [2024-12-10 05:53:45.464149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.924 qpair failed and we were unable to recover it.
00:28:57.924 [2024-12-10 05:53:45.464270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.924 [2024-12-10 05:53:45.464301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.924 qpair failed and we were unable to recover it.
00:28:57.924 [2024-12-10 05:53:45.464561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.924 [2024-12-10 05:53:45.464592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.924 qpair failed and we were unable to recover it.
00:28:57.924 [2024-12-10 05:53:45.464776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.464809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.464916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.464948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.465057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.465087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.465208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.465242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.465438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.465471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.465582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.465614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.465744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.465775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.465951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.465983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.466116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.466147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.466370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.466402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.466582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.466614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.466715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.466746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.466854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.466885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.467009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.467047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.467230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.467262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.467389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.467420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.467595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.467626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.467728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.467759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.467904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.467936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.468064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.468094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.468302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.468335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.468453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.468484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.468606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.468636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.468749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.468780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.468894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.468924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.469038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.469068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.469209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.469243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.469366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.469399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.469516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.469548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.469662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.469693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.469796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.469828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.470014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.470045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.470174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.470205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.470395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.470428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.470615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.470646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.470781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.470813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.470923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.470954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.471058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.471089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.471262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.471295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.471400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.471431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.471570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.471603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.471715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.471747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.472019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.472049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.472154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.472194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.472338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.472369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.472481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.472513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.472626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.472658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.472764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.472796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.472969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.472999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.473102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.473133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.473274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.473307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.473424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.473455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.473568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.473599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.473713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.473751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.474012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.474043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.474159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.474203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.925 [2024-12-10 05:53:45.474396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.925 [2024-12-10 05:53:45.474429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.925 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.474535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.474566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.474677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.474708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.474827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.474858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.474961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.474992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.475100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.475131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.475246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.475278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.475474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.475505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.475612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.475643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.475762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.475792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.475901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.475932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.476042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.476074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.476306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.476341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.476475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.476506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.476694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.476725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.476838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.476870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.477040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.477070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.477181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.477213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.477329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.477361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.477533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.477563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.477677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.477708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.477824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.477855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.477973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.478005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.478108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.478139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.478346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.478377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.478491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.478523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.478654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.478685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.478794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.478825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.478945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.926 [2024-12-10 05:53:45.478976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.926 qpair failed and we were unable to recover it.
00:28:57.926 [2024-12-10 05:53:45.479086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.479117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.479247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.479279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.479463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.479494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.479607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.479638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.479820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.479852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 
00:28:57.926 [2024-12-10 05:53:45.479959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.479990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.480190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.480223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.480338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.480368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.480475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.480513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.480691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.480724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 
00:28:57.926 [2024-12-10 05:53:45.480833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.480864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.480976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.481007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.481113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.481144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.481309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.481341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.481517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.481548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 
00:28:57.926 [2024-12-10 05:53:45.481724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.481754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.481928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.481960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.482072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.482102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.482209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.482242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.482377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.482408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 
00:28:57.926 [2024-12-10 05:53:45.482524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.482555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.482675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.482707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.482823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.482854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.483026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.483057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.483231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.483263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 
00:28:57.926 [2024-12-10 05:53:45.483369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.483400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.483583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.483615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.483742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.483772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.483886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.483917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 00:28:57.926 [2024-12-10 05:53:45.484031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.484062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.926 qpair failed and we were unable to recover it. 
00:28:57.926 [2024-12-10 05:53:45.484327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.926 [2024-12-10 05:53:45.484361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.484471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.484503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.484625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.484657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.484772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.484803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.484993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.485024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 
00:28:57.927 [2024-12-10 05:53:45.485146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.485187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.485358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.485390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.485524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.485556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.485659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.485689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.485861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.485892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 
00:28:57.927 [2024-12-10 05:53:45.486027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.486058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.486253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.486286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.486471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.486499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.486629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.486657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.486760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.486787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 
00:28:57.927 [2024-12-10 05:53:45.486952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.486980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.487215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.487245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.487354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.487383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.487505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.487539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.487641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.487670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 
00:28:57.927 [2024-12-10 05:53:45.487778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.487807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.487992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.488022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.488121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.488150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.488272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.488302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.488405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.488434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 
00:28:57.927 [2024-12-10 05:53:45.488607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.488636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.488763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.488792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.488911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.488939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.489053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.489081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.489280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.489310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 
00:28:57.927 [2024-12-10 05:53:45.489505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.489534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.489765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.489794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.489901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.489930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.490043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.490071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.490200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.490230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 
00:28:57.927 [2024-12-10 05:53:45.490408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.490436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.490600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.490628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.490728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.490756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.490934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.490962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.491126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.491155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 
00:28:57.927 [2024-12-10 05:53:45.491291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.491321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.491432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.491461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.491558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.491587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.491684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.491712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.491829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.491858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 
00:28:57.927 [2024-12-10 05:53:45.492090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.492159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.492377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.492414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.492591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.492624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.492728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.492759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.492882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.492914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 
00:28:57.927 [2024-12-10 05:53:45.493040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.493071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.493262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.493296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.927 [2024-12-10 05:53:45.493424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.927 [2024-12-10 05:53:45.493455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.927 qpair failed and we were unable to recover it. 00:28:57.928 [2024-12-10 05:53:45.493642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.928 [2024-12-10 05:53:45.493674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.928 qpair failed and we were unable to recover it. 00:28:57.928 [2024-12-10 05:53:45.493796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.928 [2024-12-10 05:53:45.493828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.928 qpair failed and we were unable to recover it. 
00:28:57.928 [2024-12-10 05:53:45.494079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.928 [2024-12-10 05:53:45.494110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.928 qpair failed and we were unable to recover it.
00:28:57.929 [... the same three-line error sequence repeats continuously from 05:53:45.494379 through 05:53:45.516357: every connect() to 10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED), first for tqpair=0x8cf1a0, then for tqpair=0x7f4754000b90 (from 05:53:45.505343), then for tqpair=0x7f4760000b90 (from 05:53:45.513006); each attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:57.929 [2024-12-10 05:53:45.516459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.929 [2024-12-10 05:53:45.516490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.929 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.516614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.516645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.516778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.516810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.516921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.516952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.517067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.517098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 
00:28:57.930 [2024-12-10 05:53:45.517234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.517269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.517446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.517478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.517662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.517692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.517798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.517830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.517939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.517970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 
00:28:57.930 [2024-12-10 05:53:45.518153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.518194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.518367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.518398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.518544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.518575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.518691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.518722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.518839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.518870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 
00:28:57.930 [2024-12-10 05:53:45.519048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.519079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.519190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.519222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.519333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.519363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.519483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.519514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.519630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.519661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 
00:28:57.930 [2024-12-10 05:53:45.519762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.519792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.519900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.519931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.520187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.520221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.520404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.520436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.520694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.520725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 
00:28:57.930 [2024-12-10 05:53:45.520927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.520958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.521137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.521178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.521283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.521314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.521499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.521530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.521664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.521695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 
00:28:57.930 [2024-12-10 05:53:45.521878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.521909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.522185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.522218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.522338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.522369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.522487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.522518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.522643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.522673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 
00:28:57.930 [2024-12-10 05:53:45.522795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.522826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.522937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.522974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.523087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.523118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.523243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.523276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.523457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.523488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 
00:28:57.930 [2024-12-10 05:53:45.523731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.523763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.523893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.523924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.524092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.524123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.524263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.524296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.524416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.524447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 
00:28:57.930 [2024-12-10 05:53:45.524709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.524740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.524851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.524883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.525064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.525095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.525204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.525238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.525407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.525438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 
00:28:57.930 [2024-12-10 05:53:45.525629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.525661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.930 [2024-12-10 05:53:45.525860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.930 [2024-12-10 05:53:45.525892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.930 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.526024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.526054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.526223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.526256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.526374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.526405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 
00:28:57.931 [2024-12-10 05:53:45.526582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.526613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.526730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.526761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.527021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.527051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.527185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.527218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.527481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.527512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 
00:28:57.931 [2024-12-10 05:53:45.527700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.527731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.527863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.527893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.528022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.528053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.528198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.528231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.528417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.528448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 
00:28:57.931 [2024-12-10 05:53:45.528580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.528611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.528725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.528754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.528882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.528913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.529196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.529229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.529402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.529433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 
00:28:57.931 [2024-12-10 05:53:45.529616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.529647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.529842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.529874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.529984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.530015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.530126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.530157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 00:28:57.931 [2024-12-10 05:53:45.530287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.931 [2024-12-10 05:53:45.530320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.931 qpair failed and we were unable to recover it. 
00:28:57.931 [2024-12-10 05:53:45.530503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.931 [2024-12-10 05:53:45.530534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:57.931 qpair failed and we were unable to recover it.
[log condensed: the three-line connect()/qpair failure above repeats with identical content, timestamps 05:53:45.530503 through 05:53:45.552211, always for tqpair=0x7f4760000b90, addr=10.0.0.2, port=4420; repeats omitted]
00:28:57.933 [2024-12-10 05:53:45.552383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.552414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.552524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.552556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.552742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.552773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.552889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.552921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.553110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.553141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 
00:28:57.933 [2024-12-10 05:53:45.553371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.553403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.553577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.553608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.553757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.553788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.553981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.554012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.554215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.554248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 
00:28:57.933 [2024-12-10 05:53:45.554362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.554393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.554604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.554635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.554828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.554859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.554966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.554997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.555113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.555144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 
00:28:57.933 [2024-12-10 05:53:45.555328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.555360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.555463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.555495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.555608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.555638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.555759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.555790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.555920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.555952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 
00:28:57.933 [2024-12-10 05:53:45.556071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.556103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.556278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.556310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.556483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.556514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.556616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.556647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.556774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.556805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 
00:28:57.933 [2024-12-10 05:53:45.556981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.557012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.557193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.557225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.557335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.557366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.933 [2024-12-10 05:53:45.557536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.933 [2024-12-10 05:53:45.557567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.933 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.557829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.557859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 
00:28:57.934 [2024-12-10 05:53:45.558033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.558063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.558247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.558279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.558454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.558485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.558602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.558640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.558835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.558866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 
00:28:57.934 [2024-12-10 05:53:45.558992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.559023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.559161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.559205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.559374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.559405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.559526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.559557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.559738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.559770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 
00:28:57.934 [2024-12-10 05:53:45.559872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.559903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.560019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.560050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.560233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.560266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.560372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.560403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.560572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.560603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 
00:28:57.934 [2024-12-10 05:53:45.560769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.560800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.560988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.561019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.561133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.561164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.561441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.561472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.561679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.561710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 
00:28:57.934 [2024-12-10 05:53:45.561826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.561857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.562117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.562148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.562434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.562466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.562599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.562630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.562843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.562874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 
00:28:57.934 [2024-12-10 05:53:45.563074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.563105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.563278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.563311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.563556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.563587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.563755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.563786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.563904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.563935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 
00:28:57.934 [2024-12-10 05:53:45.564042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.564073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.564264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.564297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.564494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.564526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.564704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.564735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.564867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.564897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 
00:28:57.934 [2024-12-10 05:53:45.565037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.565068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.565307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.565340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.565590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.565622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.565747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.565778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.565967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.565999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 
00:28:57.934 [2024-12-10 05:53:45.566113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.566144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.566412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.566443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.566685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.566716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.566840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.566877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.567003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.567033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 
00:28:57.934 [2024-12-10 05:53:45.567232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.567265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.567386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.567417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.567647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.567678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.567845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.567877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 00:28:57.934 [2024-12-10 05:53:45.568114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.934 [2024-12-10 05:53:45.568145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.934 qpair failed and we were unable to recover it. 
00:28:57.936 [2024-12-10 05:53:45.592020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.936 [2024-12-10 05:53:45.592052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.936 qpair failed and we were unable to recover it. 00:28:57.936 [2024-12-10 05:53:45.592246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.936 [2024-12-10 05:53:45.592279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.936 qpair failed and we were unable to recover it. 00:28:57.936 [2024-12-10 05:53:45.592383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.936 [2024-12-10 05:53:45.592414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.936 qpair failed and we were unable to recover it. 00:28:57.936 [2024-12-10 05:53:45.592686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.936 [2024-12-10 05:53:45.592723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.936 qpair failed and we were unable to recover it. 00:28:57.936 [2024-12-10 05:53:45.592847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.936 [2024-12-10 05:53:45.592878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.936 qpair failed and we were unable to recover it. 
00:28:57.936 [2024-12-10 05:53:45.593059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.936 [2024-12-10 05:53:45.593092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.936 qpair failed and we were unable to recover it. 00:28:57.936 [2024-12-10 05:53:45.593283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.936 [2024-12-10 05:53:45.593316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.936 qpair failed and we were unable to recover it. 00:28:57.936 [2024-12-10 05:53:45.593595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.936 [2024-12-10 05:53:45.593625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.936 qpair failed and we were unable to recover it. 00:28:57.936 [2024-12-10 05:53:45.593798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.936 [2024-12-10 05:53:45.593830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.936 qpair failed and we were unable to recover it. 00:28:57.936 [2024-12-10 05:53:45.594021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.594053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 
00:28:57.937 [2024-12-10 05:53:45.594319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.594351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.594459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.594490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.594682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.594713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.594890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.594921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.595119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.595151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 
00:28:57.937 [2024-12-10 05:53:45.595277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.595309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.595497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.595527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.595708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.595740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.596001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.596032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.596217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.596250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 
00:28:57.937 [2024-12-10 05:53:45.596422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.596454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.596721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.596752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.596938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.596968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.597070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.597101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.597221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.597253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 
00:28:57.937 [2024-12-10 05:53:45.597555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.597586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.597716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.597747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.597931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.597963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.598153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.598190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 00:28:57.937 [2024-12-10 05:53:45.598432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.937 [2024-12-10 05:53:45.598464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:57.937 qpair failed and we were unable to recover it. 
00:28:57.937 [2024-12-10 05:53:45.598693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.598765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.599058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.599093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.599282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.599316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.599586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.599618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.599809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.599840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.600081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.600112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.600299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.600333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.600454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.600485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.600749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.600781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.600906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.600938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.601115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.601147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.601397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.601429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.601639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.601671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.601796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.601837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.602050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.602081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.602333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.602367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.602550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.602581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.602772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.602804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.603007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.603038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.603294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.603327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.603454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.603485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.603668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.603699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.603870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.603901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.604077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.604109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.604322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.604355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.604530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.604561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.604701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.604733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.604856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.604888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.605150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.605191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.605398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.605431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.605536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.605568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.605694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.605725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.605966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.605998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.606179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.606212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.606398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.606430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.606608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.606641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.606825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.606855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.937 qpair failed and we were unable to recover it.
00:28:57.937 [2024-12-10 05:53:45.607021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.937 [2024-12-10 05:53:45.607053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.607234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.607268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.607456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.607487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.607663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.607734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.607940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.607976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.608194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.608229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.608424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.608455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.608580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.608612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.608810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.608841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.609012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.609043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.609216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.609249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.609439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.609470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.609590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.609620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.609818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.609849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.609982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.610014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.610118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.610149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.610289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.610322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.610507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.610540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.610714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.610746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.611009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.611041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.611157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.611198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.611444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.611475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.611656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.611687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.611929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.611960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.612178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.612211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.612410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.612442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.612624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.612655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.612844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.938 [2024-12-10 05:53:45.612875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:57.938 qpair failed and we were unable to recover it.
00:28:57.938 [2024-12-10 05:53:45.613076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.613107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.613325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.613358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.613527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.613565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.613740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.613772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.614007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.614038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 
00:28:57.938 [2024-12-10 05:53:45.614255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.614288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.614507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.614540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.614783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.614814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.614985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.615016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.615141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.615183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 
00:28:57.938 [2024-12-10 05:53:45.615425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.615456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.615559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.615591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.615829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.615861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.616043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.616074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.616317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.616350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 
00:28:57.938 [2024-12-10 05:53:45.616606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.616638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.616779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.616811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.617041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.617072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.617256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.617289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.617499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.617530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 
00:28:57.938 [2024-12-10 05:53:45.617759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.617790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.617974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.618005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.618131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.618163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.618278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.618310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.618479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.618510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 
00:28:57.938 [2024-12-10 05:53:45.618692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.618724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.618989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.619021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.619214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.938 [2024-12-10 05:53:45.619246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.938 qpair failed and we were unable to recover it. 00:28:57.938 [2024-12-10 05:53:45.619498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.619529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.619717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.619754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 
00:28:57.939 [2024-12-10 05:53:45.619963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.619994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.620184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.620217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.620403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.620435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.620562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.620593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.620854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.620886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 
00:28:57.939 [2024-12-10 05:53:45.621011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.621042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.621231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.621264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.621454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.621486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.621772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.621804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.622048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.622080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 
00:28:57.939 [2024-12-10 05:53:45.622283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.622316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.622554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.622585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.622795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.622827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.623070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.623102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.623281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.623314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 
00:28:57.939 [2024-12-10 05:53:45.623488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.623520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.623627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.623658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.623774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.623805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.623986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.624018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.624200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.624233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 
00:28:57.939 [2024-12-10 05:53:45.624401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.624433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.624611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.624642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.624828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.624858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.625044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.625076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.625193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.625226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 
00:28:57.939 [2024-12-10 05:53:45.625396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.625427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.625548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.625580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.625774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.625807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.625943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.625974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.626100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.626132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 
00:28:57.939 [2024-12-10 05:53:45.626330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.626363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.626603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.626634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.626808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.626839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.626960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.626991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.627206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.627239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 
00:28:57.939 [2024-12-10 05:53:45.627362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.627393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.627520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.627552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.627663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.627695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.627897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.627928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.628112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.628143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 
00:28:57.939 [2024-12-10 05:53:45.628271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.628309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.628478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.628509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.628748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.628780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.628881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.628913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.629117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.629149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 
00:28:57.939 [2024-12-10 05:53:45.629348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.629381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.629497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.629527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.629717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.629748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.630013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.630044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 00:28:57.939 [2024-12-10 05:53:45.630147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.939 [2024-12-10 05:53:45.630190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.939 qpair failed and we were unable to recover it. 
00:28:57.940 [2024-12-10 05:53:45.637245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.940 [2024-12-10 05:53:45.637278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:57.940 qpair failed and we were unable to recover it. 00:28:57.940 [2024-12-10 05:53:45.637507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.940 [2024-12-10 05:53:45.637579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.940 qpair failed and we were unable to recover it. 00:28:57.940 [2024-12-10 05:53:45.637711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.940 [2024-12-10 05:53:45.637748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.940 qpair failed and we were unable to recover it. 00:28:57.940 [2024-12-10 05:53:45.637920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.940 [2024-12-10 05:53:45.637953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.940 qpair failed and we were unable to recover it. 00:28:57.940 [2024-12-10 05:53:45.638122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.940 [2024-12-10 05:53:45.638154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.940 qpair failed and we were unable to recover it. 
00:28:57.941 [2024-12-10 05:53:45.654466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.941 [2024-12-10 05:53:45.654498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.941 qpair failed and we were unable to recover it. 00:28:57.941 [2024-12-10 05:53:45.654703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.941 [2024-12-10 05:53:45.654740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.941 qpair failed and we were unable to recover it. 00:28:57.941 [2024-12-10 05:53:45.654880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.941 [2024-12-10 05:53:45.654911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.941 qpair failed and we were unable to recover it. 00:28:57.941 [2024-12-10 05:53:45.655186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.941 [2024-12-10 05:53:45.655220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.941 qpair failed and we were unable to recover it. 00:28:57.941 [2024-12-10 05:53:45.655397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.941 [2024-12-10 05:53:45.655427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.941 qpair failed and we were unable to recover it. 
00:28:57.941 [2024-12-10 05:53:45.655626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.941 [2024-12-10 05:53:45.655658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.941 qpair failed and we were unable to recover it. 00:28:57.941 [2024-12-10 05:53:45.655865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.941 [2024-12-10 05:53:45.655896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.941 qpair failed and we were unable to recover it. 00:28:57.941 [2024-12-10 05:53:45.656070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.941 [2024-12-10 05:53:45.656101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.941 qpair failed and we were unable to recover it. 00:28:57.941 [2024-12-10 05:53:45.656289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.656322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.656505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.656536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 
00:28:57.942 [2024-12-10 05:53:45.656673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.656705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.656914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.656945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.657211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.657244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.657482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.657513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.657743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.657776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 
00:28:57.942 [2024-12-10 05:53:45.657950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.657982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.658198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.658231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.658421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.658452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.658638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.658670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.658860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.658892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 
00:28:57.942 [2024-12-10 05:53:45.659077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.659108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.659384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.659417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.659668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.659699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.659872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.659904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.660192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.660225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 
00:28:57.942 [2024-12-10 05:53:45.660399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.660431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.660614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.660645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.660838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.660869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.661007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.661038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.661220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.661253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 
00:28:57.942 [2024-12-10 05:53:45.661383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.661415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.661584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.661616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.661844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.661876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.662110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.662143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.662401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.662433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 
00:28:57.942 [2024-12-10 05:53:45.662679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.662711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.662899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.662931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.663201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.663233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.663438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.663470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.663721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.663753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 
00:28:57.942 [2024-12-10 05:53:45.664044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.664076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.664333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.664391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.664607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.664639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.664748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.664780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.664997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.665028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 
00:28:57.942 [2024-12-10 05:53:45.665202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.665236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.665351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.665382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.665499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.665531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.665792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.665824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.666020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.666051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 
00:28:57.942 [2024-12-10 05:53:45.666154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.666202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.666333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.666365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.666551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.666583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.666846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.666878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.667000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.667031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 
00:28:57.942 [2024-12-10 05:53:45.667209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.667243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.667436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.667467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.667726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.667758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.667871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.667903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.668082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.668113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 
00:28:57.942 [2024-12-10 05:53:45.668305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.942 [2024-12-10 05:53:45.668338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.942 qpair failed and we were unable to recover it. 00:28:57.942 [2024-12-10 05:53:45.668518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.668550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.668723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.668754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.668876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.668908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.669106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.669138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 
00:28:57.943 [2024-12-10 05:53:45.669338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.669371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.669576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.669608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.669791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.669823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.670011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.670043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.670312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.670346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 
00:28:57.943 [2024-12-10 05:53:45.670617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.670650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.670820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.670852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.671066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.671097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.671249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.671282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.671477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.671509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 
00:28:57.943 [2024-12-10 05:53:45.671632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.671664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.671914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.671946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.672076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.672108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.672306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.672339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 00:28:57.943 [2024-12-10 05:53:45.672516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.943 [2024-12-10 05:53:45.672548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.943 qpair failed and we were unable to recover it. 
00:28:57.945 [2024-12-10 05:53:45.697225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.697258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.697380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.697412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.697616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.697647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.697908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.697940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.698113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.698145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 
00:28:57.945 [2024-12-10 05:53:45.698338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.698370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.698624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.698656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.698762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.698793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.698913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.698945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.699149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.699191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 
00:28:57.945 [2024-12-10 05:53:45.699376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.699407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.699606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.699643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.699841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.699874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.700115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.700146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.700344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.700377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 
00:28:57.945 [2024-12-10 05:53:45.700551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.700583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.700846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.700877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.701049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.701081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.701201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.701234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.701424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.701455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 
00:28:57.945 [2024-12-10 05:53:45.701571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.701602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.701734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.701766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.701882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.701913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.702096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.702128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.702424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.702459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 
00:28:57.945 [2024-12-10 05:53:45.702655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.702688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.702920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.702952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.703069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.703101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.703256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.703289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.703498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.703530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 
00:28:57.945 [2024-12-10 05:53:45.703797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.703829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.704007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.704039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.704162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.704201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.704381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.704413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.704597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.704629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 
00:28:57.945 [2024-12-10 05:53:45.704798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.945 [2024-12-10 05:53:45.704829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.945 qpair failed and we were unable to recover it. 00:28:57.945 [2024-12-10 05:53:45.704963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.704995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.705185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.705218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.705351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.705382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.705509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.705541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 
00:28:57.946 [2024-12-10 05:53:45.705665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.705696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.705879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.705911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.706027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.706058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.706309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.706343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.706453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.706484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 
00:28:57.946 [2024-12-10 05:53:45.706678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.706710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.706976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.707007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.707199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.707232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.707356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.707388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.707558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.707590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 
00:28:57.946 [2024-12-10 05:53:45.707843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.707876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.707994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.708036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.708152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.708192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.708303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.708335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.708523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.708555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 
00:28:57.946 [2024-12-10 05:53:45.708727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.708758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.708942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.708975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.709081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.709113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.709306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.709339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.709456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.709488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 
00:28:57.946 [2024-12-10 05:53:45.709598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.709630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.709742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.709774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.709953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.709985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.710211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.710244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.710530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.710562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 
00:28:57.946 [2024-12-10 05:53:45.710740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.710772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.710900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.710932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.711100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.711131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.711377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.711410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.711532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.711564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 
00:28:57.946 [2024-12-10 05:53:45.711825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.711856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.711970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.712002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.712260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.712292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.712473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.712505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 00:28:57.946 [2024-12-10 05:53:45.712739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.946 [2024-12-10 05:53:45.712772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.946 qpair failed and we were unable to recover it. 
00:28:57.946 [2024-12-10 05:53:45.713039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.946 [2024-12-10 05:53:45.713070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:57.946 qpair failed and we were unable to recover it.
00:28:57.946 [... the three-line error above repeats verbatim for the same tqpair=0x7f4754000b90 (addr=10.0.0.2, port=4420) with timestamps 05:53:45.713191 through 05:53:45.737809; only the timestamps differ ...]
00:28:57.948 [2024-12-10 05:53:45.738022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.738054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.948 [2024-12-10 05:53:45.738223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.738256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.948 [2024-12-10 05:53:45.738445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.738477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.948 [2024-12-10 05:53:45.738663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.738695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.948 [2024-12-10 05:53:45.738981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.739013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 
00:28:57.948 [2024-12-10 05:53:45.739223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.739256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.948 [2024-12-10 05:53:45.739439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.739471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.948 [2024-12-10 05:53:45.739645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.739676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.948 [2024-12-10 05:53:45.739919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.739951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.948 [2024-12-10 05:53:45.740078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.740110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 
00:28:57.948 [2024-12-10 05:53:45.740382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.740416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.948 [2024-12-10 05:53:45.740607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.740639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.948 [2024-12-10 05:53:45.740823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.740855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.948 [2024-12-10 05:53:45.741109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.948 [2024-12-10 05:53:45.741141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.948 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.741267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.741298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 
00:28:57.949 [2024-12-10 05:53:45.741541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.741572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.741756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.741787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.741993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.742024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.742265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.742298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.742558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.742590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 
00:28:57.949 [2024-12-10 05:53:45.742829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.742860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.743029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.743067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.743265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.743298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.743478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.743510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.743695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.743726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 
00:28:57.949 [2024-12-10 05:53:45.743981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.744013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.744190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.744222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.744422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.744455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.744714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.744746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.744944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.744976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 
00:28:57.949 [2024-12-10 05:53:45.745176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.745209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.745476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.745509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.745689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.745721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.745850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.745881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.746085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.746116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 
00:28:57.949 [2024-12-10 05:53:45.746391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.746425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.746599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.746630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.746833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.746865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.747067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.747099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.747269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.747303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 
00:28:57.949 [2024-12-10 05:53:45.747533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.747565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.747677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.747709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.747858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.747890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.748074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.748105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.748303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.748336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 
00:28:57.949 [2024-12-10 05:53:45.748512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.748544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.748810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.748842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.749038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.749070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.749224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.749258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.749395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.749426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 
00:28:57.949 [2024-12-10 05:53:45.749611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.749643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.749831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.749863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.750046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.750078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.750203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.750236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.750422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.750454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 
00:28:57.949 [2024-12-10 05:53:45.750693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.750724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.750928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.750960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.751069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.751101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.751275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.751308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.751494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.751526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 
00:28:57.949 [2024-12-10 05:53:45.751650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.751682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.751943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.751985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.752190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.752224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.752459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.752491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.752673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.752705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 
00:28:57.949 [2024-12-10 05:53:45.752913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.752945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.753132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.949 [2024-12-10 05:53:45.753164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.949 qpair failed and we were unable to recover it. 00:28:57.949 [2024-12-10 05:53:45.753348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.753379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 00:28:57.950 [2024-12-10 05:53:45.753550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.753582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 00:28:57.950 [2024-12-10 05:53:45.753827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.753860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 
00:28:57.950 [2024-12-10 05:53:45.753984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.754016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 00:28:57.950 [2024-12-10 05:53:45.754149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.754199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 00:28:57.950 [2024-12-10 05:53:45.754440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.754472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 00:28:57.950 [2024-12-10 05:53:45.754648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.754679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 00:28:57.950 [2024-12-10 05:53:45.754796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.754827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 
00:28:57.950 [2024-12-10 05:53:45.755047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.755079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 00:28:57.950 [2024-12-10 05:53:45.755207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.755240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 00:28:57.950 [2024-12-10 05:53:45.755379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.755411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 00:28:57.950 [2024-12-10 05:53:45.755598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.755630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 00:28:57.950 [2024-12-10 05:53:45.755827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.950 [2024-12-10 05:53:45.755859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.950 qpair failed and we were unable to recover it. 
00:28:57.952 [2024-12-10 05:53:45.781342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.781375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.781595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.781627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.781819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.781850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.782060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.782091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.782374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.782407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 
00:28:57.952 [2024-12-10 05:53:45.782624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.782656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.782774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.782806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.782990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.783022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.783202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.783235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.783428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.783460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 
00:28:57.952 [2024-12-10 05:53:45.783577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.783609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.783803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.783835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.784124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.784158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.784302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.784334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.784522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.784555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 
00:28:57.952 [2024-12-10 05:53:45.784736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.784769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.784949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.784982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.785146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.785189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.785298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.785336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.785461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.785493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 
00:28:57.952 [2024-12-10 05:53:45.785778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.785810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.786046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.786078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.786248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.786281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.786395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.786427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.786632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.786664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 
00:28:57.952 [2024-12-10 05:53:45.786901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.786933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.787119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.787151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.787329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.787365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.787558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.787591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.787723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.787755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 
00:28:57.952 [2024-12-10 05:53:45.787976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.788011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.788248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.788282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.788433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.788468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.788708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.788740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.788874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.788907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 
00:28:57.952 [2024-12-10 05:53:45.789058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.789090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.789218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.789250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.789443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.789474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.789660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.789691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.789888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.789919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 
00:28:57.952 [2024-12-10 05:53:45.790192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.790225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.790416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.790448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.790632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.790663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.790846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.790878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.791192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.791225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 
00:28:57.952 [2024-12-10 05:53:45.791363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.791396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.952 [2024-12-10 05:53:45.791656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-12-10 05:53:45.791688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.952 qpair failed and we were unable to recover it. 00:28:57.953 [2024-12-10 05:53:45.791883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.953 [2024-12-10 05:53:45.791915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:57.953 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.792164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.792208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.792398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.792432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 
00:28:58.230 [2024-12-10 05:53:45.792691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.792723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.792924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.792956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.793205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.793240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.793355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.793386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.793636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.793669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 
00:28:58.230 [2024-12-10 05:53:45.793884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.793916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.794163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.794203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.794391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.794422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.794685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.794722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.794903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.794936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 
00:28:58.230 [2024-12-10 05:53:45.795194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.795227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.795410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.795442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.795653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.795685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.795959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.795991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.796192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.796225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 
00:28:58.230 [2024-12-10 05:53:45.796418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.796449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.796707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.796739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.796926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.796958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.797199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.797232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.797469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.797501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 
00:28:58.230 [2024-12-10 05:53:45.797736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.797768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.797899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.797931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.798147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.798189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.798475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.798507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.230 qpair failed and we were unable to recover it. 00:28:58.230 [2024-12-10 05:53:45.798769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.230 [2024-12-10 05:53:45.798801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.231 qpair failed and we were unable to recover it. 
00:28:58.231 [2024-12-10 05:53:45.799079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.231 [2024-12-10 05:53:45.799111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.231 qpair failed and we were unable to recover it.
[... the three lines above repeat ~100 more times for tqpair=0x7f4754000b90 between 05:53:45.799 and 05:53:45.826; identical except for timestamps ...]
00:28:58.233 [2024-12-10 05:53:45.826426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.233 [2024-12-10 05:53:45.826496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.233 qpair failed and we were unable to recover it.
[... the three lines above repeat ~12 more times for tqpair=0x7f4760000b90 between 05:53:45.826 and 05:53:45.829; identical except for timestamps ...]
00:28:58.233 [2024-12-10 05:53:45.829936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.829968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.830231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.830263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.830532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.830563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.830824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.830856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.831110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.831142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 
00:28:58.234 [2024-12-10 05:53:45.831436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.831468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.831744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.831776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.831986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.832018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.832205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.832239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.832452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.832484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 
00:28:58.234 [2024-12-10 05:53:45.832726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.832758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.832983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.833017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.833217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.833251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.833488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.833520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.833647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.833678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 
00:28:58.234 [2024-12-10 05:53:45.833936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.834007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.834268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.834305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.834543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.834575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.834763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.834795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.835031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.835062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 
00:28:58.234 [2024-12-10 05:53:45.835246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.835279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.835470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.835503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.835762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.835793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.836048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.836080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.836337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.836371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 
00:28:58.234 [2024-12-10 05:53:45.836609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.836641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.836884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.836915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.837155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.837198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.837456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.837498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.837745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.837777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 
00:28:58.234 [2024-12-10 05:53:45.837959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.837991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.838275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.838308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.838576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.234 [2024-12-10 05:53:45.838607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.234 qpair failed and we were unable to recover it. 00:28:58.234 [2024-12-10 05:53:45.838814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.838846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.839085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.839117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 
00:28:58.235 [2024-12-10 05:53:45.839325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.839358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.839618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.839649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.839894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.839926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.840134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.840177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.840425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.840456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 
00:28:58.235 [2024-12-10 05:53:45.840707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.840739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.840949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.840981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.841240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.841274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.841518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.841550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.841722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.841754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 
00:28:58.235 [2024-12-10 05:53:45.842000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.842032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.842271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.842304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.842590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.842623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.842889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.842919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.843136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.843174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 
00:28:58.235 [2024-12-10 05:53:45.843345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.843378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.843644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.843675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.843960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.843992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.844259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.844292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.844576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.844608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 
00:28:58.235 [2024-12-10 05:53:45.844848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.844916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.845117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.845152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.845354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.845387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.845623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.845655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.845842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.845873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 
00:28:58.235 [2024-12-10 05:53:45.846062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.846093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.846261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.846294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.846541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.846572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.846819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.846850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.847109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.847140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 
00:28:58.235 [2024-12-10 05:53:45.847437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.847470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.847710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.847742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.847875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.847906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.848080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.848111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.848336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.848370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 
00:28:58.235 [2024-12-10 05:53:45.848566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.848597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.848785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.848816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.235 qpair failed and we were unable to recover it. 00:28:58.235 [2024-12-10 05:53:45.849078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.235 [2024-12-10 05:53:45.849110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.849248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.849281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.849463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.849495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 
00:28:58.236 [2024-12-10 05:53:45.849754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.849786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.849974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.850006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.850241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.850273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.850557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.850589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.850852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.850884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 
00:28:58.236 [2024-12-10 05:53:45.851182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.851215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.851424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.851455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.851693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.851731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.852030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.852062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.852267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.852299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 
00:28:58.236 [2024-12-10 05:53:45.852538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.852570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.852805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.852837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.853075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.853106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.853367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.853401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.853683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.853715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 
00:28:58.236 [2024-12-10 05:53:45.853974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.854006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.854302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.854335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.854473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.854504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.854744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.854776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.855012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.855044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 
00:28:58.236 [2024-12-10 05:53:45.855222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.855256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.855522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.855555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.855843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.855875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.856122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.856153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.856452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.856484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 
00:28:58.236 [2024-12-10 05:53:45.856747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.856778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.857072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.857103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.857312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.857345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.857586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.857617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.857854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.857885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 
00:28:58.236 [2024-12-10 05:53:45.858122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.858154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.858350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.858382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.858644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.858675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.858877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.858909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.859160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.859215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 
00:28:58.236 [2024-12-10 05:53:45.859437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.859469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.236 qpair failed and we were unable to recover it. 00:28:58.236 [2024-12-10 05:53:45.859727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.236 [2024-12-10 05:53:45.859758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.860005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.860037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.860296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.860329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.860521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.860552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 
00:28:58.237 [2024-12-10 05:53:45.860740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.860771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.860947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.860979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.861189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.861221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.861493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.861524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.861734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.861765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 
00:28:58.237 [2024-12-10 05:53:45.862012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.862044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.862246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.862279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.862480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.862512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.862731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.862771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.863044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.863077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 
00:28:58.237 [2024-12-10 05:53:45.863333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.863368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.863632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.863663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.863924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.863956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.864244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.864276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.864494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.864525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 
00:28:58.237 [2024-12-10 05:53:45.864765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.864797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.865058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.865087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.865371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.865404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.865683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.865715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.865978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.866009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 
00:28:58.237 [2024-12-10 05:53:45.866195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.866227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.866490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.866529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.866666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.866697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.866981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.867012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.867184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.867217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 
00:28:58.237 [2024-12-10 05:53:45.867474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.867506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.867790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.867822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.868109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.868141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.868335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.868366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.868619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.868651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 
00:28:58.237 [2024-12-10 05:53:45.868860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.868892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.869095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.869127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.869582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.869626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.237 [2024-12-10 05:53:45.869912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-12-10 05:53:45.869949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.237 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.870141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.870181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 
00:28:58.238 [2024-12-10 05:53:45.870450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.870483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.870755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.870786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.871047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.871078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.871362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.871395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.871660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.871691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 
00:28:58.238 [2024-12-10 05:53:45.871982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.872013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.872289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.872323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.872507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.872538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.872751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.872783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.872966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.872997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 
00:28:58.238 [2024-12-10 05:53:45.873257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.873290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.873474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.873505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.873674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.873705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.873917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.873955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.874156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.874196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 
00:28:58.238 [2024-12-10 05:53:45.874380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.874412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.874670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.874702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.874988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.875019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.875219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.875252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 00:28:58.238 [2024-12-10 05:53:45.875490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-12-10 05:53:45.875522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.238 qpair failed and we were unable to recover it. 
00:28:58.238 [2024-12-10 05:53:45.875798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.238 [2024-12-10 05:53:45.875830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.238 qpair failed and we were unable to recover it.
00:28:58.238 [connect()-failed / sock-connection-error / qpair-failed sequence repeated 6 more times for tqpair=0x8cf1a0, 05:53:45.876096 through 05:53:45.877266]
00:28:58.238 [2024-12-10 05:53:45.877459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.238 [2024-12-10 05:53:45.877496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.238 qpair failed and we were unable to recover it.
00:28:58.239 [same sequence repeated 39 more times for tqpair=0x7f4760000b90, 05:53:45.877679 through 05:53:45.887484]
00:28:58.239 [2024-12-10 05:53:45.887712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.239 [2024-12-10 05:53:45.887784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.239 qpair failed and we were unable to recover it.
00:28:58.241 [same sequence repeated 67 more times for tqpair=0x7f4758000b90, 05:53:45.888012 through 05:53:45.905904]
00:28:58.241 [2024-12-10 05:53:45.906176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.906208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.906409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.906441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.906701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.906733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.906916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.906948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.907140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.907182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 
00:28:58.241 [2024-12-10 05:53:45.907421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.907452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.907654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.907686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.907856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.907887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.908089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.908121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.908395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.908429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 
00:28:58.241 [2024-12-10 05:53:45.908704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.908736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.909023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.909055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.909329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.909364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.909645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.909676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 00:28:58.241 [2024-12-10 05:53:45.909953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.241 [2024-12-10 05:53:45.909985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.241 qpair failed and we were unable to recover it. 
00:28:58.242 [2024-12-10 05:53:45.910266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.910299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.910557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.910589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.910835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.910866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.911105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.911138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.911433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.911467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 
00:28:58.242 [2024-12-10 05:53:45.911750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.911782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.912044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.912076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.912289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.912321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.912561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.912593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.912856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.912888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 
00:28:58.242 [2024-12-10 05:53:45.913155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.913204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.913399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.913431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.913663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.913695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.913934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.913966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.914215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.914248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 
00:28:58.242 [2024-12-10 05:53:45.914488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.914520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.914692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.914724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.914925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.914956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.915229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.915261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.915548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.915580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 
00:28:58.242 [2024-12-10 05:53:45.915788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.915820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.916010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.916041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.916214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.916246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.916382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.916414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.916684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.916716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 
00:28:58.242 [2024-12-10 05:53:45.916988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.917020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.917152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.917194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.917400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.917431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.917672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.917704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.917894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.917925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 
00:28:58.242 [2024-12-10 05:53:45.918212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.918245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.918539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.918570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.918751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.918783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.919046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.919078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.919268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.919300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 
00:28:58.242 [2024-12-10 05:53:45.919538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.919569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.919831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.919862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.920068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.920101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.242 [2024-12-10 05:53:45.920288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.242 [2024-12-10 05:53:45.920322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.242 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.920584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.920615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 
00:28:58.243 [2024-12-10 05:53:45.920785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.920817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.920942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.920974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.921215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.921247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.921420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.921452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.921639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.921671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 
00:28:58.243 [2024-12-10 05:53:45.921924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.921956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.922254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.922287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.922485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.922517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.922655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.922687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.922954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.922985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 
00:28:58.243 [2024-12-10 05:53:45.923176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.923213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.923405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.923437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.923705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.923736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.923928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.923960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.924202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.924235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 
00:28:58.243 [2024-12-10 05:53:45.924411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.924443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.924735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.924766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.925018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.925049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.925247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.925280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.925551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.925583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 
00:28:58.243 [2024-12-10 05:53:45.925759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.925790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.925973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.926005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.926206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.926239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.926428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.926460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 00:28:58.243 [2024-12-10 05:53:45.926728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.243 [2024-12-10 05:53:45.926760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.243 qpair failed and we were unable to recover it. 
00:28:58.246 [2024-12-10 05:53:45.954944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.954975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 00:28:58.246 [2024-12-10 05:53:45.955176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.955209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 00:28:58.246 [2024-12-10 05:53:45.955455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.955489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 00:28:58.246 [2024-12-10 05:53:45.955680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.955712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 00:28:58.246 [2024-12-10 05:53:45.955890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.955922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 
00:28:58.246 [2024-12-10 05:53:45.956046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.956078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 00:28:58.246 [2024-12-10 05:53:45.956266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.956300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 00:28:58.246 [2024-12-10 05:53:45.956408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.956441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 00:28:58.246 [2024-12-10 05:53:45.956730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.956763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 00:28:58.246 [2024-12-10 05:53:45.957015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.957048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 
00:28:58.246 [2024-12-10 05:53:45.957281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.957316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 00:28:58.246 [2024-12-10 05:53:45.957461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.957494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 00:28:58.246 [2024-12-10 05:53:45.957758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.246 [2024-12-10 05:53:45.957789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.246 qpair failed and we were unable to recover it. 00:28:58.246 [2024-12-10 05:53:45.957908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.957940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.958085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.958117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 
00:28:58.247 [2024-12-10 05:53:45.958310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.958343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.958552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.958585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.958756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.958789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.958969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.959002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.959219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.959253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 
00:28:58.247 [2024-12-10 05:53:45.959495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.959527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.959709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.959741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.959850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.959888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.960131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.960163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.960297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.960330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 
00:28:58.247 [2024-12-10 05:53:45.960451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.960483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.960675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.960707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.960923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.960955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.961184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.961218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.961342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.961374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 
00:28:58.247 [2024-12-10 05:53:45.961483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.961515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.961776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.961810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.962004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.962037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.962246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.962281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.962459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.962491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 
00:28:58.247 [2024-12-10 05:53:45.962734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.962766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.962901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.962934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.963108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.963140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.963376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.963409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.963657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.963689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 
00:28:58.247 [2024-12-10 05:53:45.963865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.963898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.964088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.964121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.964273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.964308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.964441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.964474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.964664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.964696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 
00:28:58.247 [2024-12-10 05:53:45.964942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.964974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.965200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.965234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.965481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.965513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.965700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.965739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 00:28:58.247 [2024-12-10 05:53:45.965950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.247 [2024-12-10 05:53:45.965983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.247 qpair failed and we were unable to recover it. 
00:28:58.247 [2024-12-10 05:53:45.966215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.966248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.966395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.966427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.966624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.966658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.966845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.966877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.967124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.967157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 
00:28:58.248 [2024-12-10 05:53:45.967393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.967427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.967606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.967638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.967865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.967897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.968074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.968107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.968389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.968422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 
00:28:58.248 [2024-12-10 05:53:45.968692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.968724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.968918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.968950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.969145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.969194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.969376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.969409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.969664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.969696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 
00:28:58.248 [2024-12-10 05:53:45.969944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.969975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.970163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.970208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.970412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.970445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.970702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.970734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.970994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.971026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 
00:28:58.248 [2024-12-10 05:53:45.971151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.971195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.971466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.971498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.971688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.971721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.971914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.971945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.972069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.972101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 
00:28:58.248 [2024-12-10 05:53:45.972227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.972261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.972477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.972509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.972750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.972782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.973022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.973054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 00:28:58.248 [2024-12-10 05:53:45.973243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.248 [2024-12-10 05:53:45.973275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.248 qpair failed and we were unable to recover it. 
00:28:58.251 (the same connect() failed, errno = 111 / sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 / qpair failed messages repeat continuously from 05:53:45.973486 through 05:53:45.998269; no qpair was recovered)
00:28:58.251 [2024-12-10 05:53:45.998482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.251 [2024-12-10 05:53:45.998513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.251 qpair failed and we were unable to recover it. 00:28:58.251 [2024-12-10 05:53:45.998735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.251 [2024-12-10 05:53:45.998767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.251 qpair failed and we were unable to recover it. 00:28:58.251 [2024-12-10 05:53:45.998988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.251 [2024-12-10 05:53:45.999019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.251 qpair failed and we were unable to recover it. 00:28:58.251 [2024-12-10 05:53:45.999259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.251 [2024-12-10 05:53:45.999293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.251 qpair failed and we were unable to recover it. 00:28:58.251 [2024-12-10 05:53:45.999563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.251 [2024-12-10 05:53:45.999595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.251 qpair failed and we were unable to recover it. 
00:28:58.251 [2024-12-10 05:53:45.999786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.251 [2024-12-10 05:53:45.999817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.251 qpair failed and we were unable to recover it. 00:28:58.251 [2024-12-10 05:53:46.000016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.251 [2024-12-10 05:53:46.000049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.251 qpair failed and we were unable to recover it. 00:28:58.251 [2024-12-10 05:53:46.000257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.251 [2024-12-10 05:53:46.000289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.251 qpair failed and we were unable to recover it. 00:28:58.251 [2024-12-10 05:53:46.000501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.251 [2024-12-10 05:53:46.000534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.251 qpair failed and we were unable to recover it. 00:28:58.251 [2024-12-10 05:53:46.000816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.251 [2024-12-10 05:53:46.000848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.251 qpair failed and we were unable to recover it. 
00:28:58.251 [2024-12-10 05:53:46.001122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.251 [2024-12-10 05:53:46.001154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.251 qpair failed and we were unable to recover it. 00:28:58.251 [2024-12-10 05:53:46.001406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.001439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.001702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.001733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.001998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.002029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.002297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.002330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 
00:28:58.252 [2024-12-10 05:53:46.002573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.002605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.002803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.002834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.003049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.003081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.003333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.003367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.003733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.003807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 
00:28:58.252 [2024-12-10 05:53:46.004075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.004112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.004395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.004430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.004612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.004644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.004817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.004849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.005037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.005069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 
00:28:58.252 [2024-12-10 05:53:46.005264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.005298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.005486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.005518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.005792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.005823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.006063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.006094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.006363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.006395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 
00:28:58.252 [2024-12-10 05:53:46.006636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.006668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.007003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.007034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.007298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.007331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.007512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.007546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.007737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.007768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 
00:28:58.252 [2024-12-10 05:53:46.007962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.007993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.008239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.008273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.008410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.008441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.008705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.008738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.008986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.009018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 
00:28:58.252 [2024-12-10 05:53:46.009306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.009339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.009529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.009560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.009812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.009844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.010142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.010181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.010419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.010451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 
00:28:58.252 [2024-12-10 05:53:46.010716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.252 [2024-12-10 05:53:46.010747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.252 qpair failed and we were unable to recover it. 00:28:58.252 [2024-12-10 05:53:46.010936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.010974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.011230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.011263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.011447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.011478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.011718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.011750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 
00:28:58.253 [2024-12-10 05:53:46.012016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.012047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.012336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.012369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.012497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.012528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.012767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.012799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.013064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.013096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 
00:28:58.253 [2024-12-10 05:53:46.013289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.013322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.013593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.013625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.013934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.013965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.014284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.014317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.014512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.014544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 
00:28:58.253 [2024-12-10 05:53:46.014809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.014840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.015051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.015082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.015225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.015258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.015476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.015507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.015723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.015754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 
00:28:58.253 [2024-12-10 05:53:46.015998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.016030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.016290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.016324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.016566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.016598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.016725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.016756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.017023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.017055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 
00:28:58.253 [2024-12-10 05:53:46.017320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.017354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.017546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.017577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.017843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.017874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.018064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.018101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 00:28:58.253 [2024-12-10 05:53:46.018277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.253 [2024-12-10 05:53:46.018310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.253 qpair failed and we were unable to recover it. 
00:28:58.253 [2024-12-10 05:53:46.018594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.253 [2024-12-10 05:53:46.018625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.253 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats with only the timestamps advancing, from 05:53:46.018880 through 05:53:46.048829; tqpair=0x8cf1a0, addr=10.0.0.2, port=4420 throughout ...]
00:28:58.256 [2024-12-10 05:53:46.049115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.256 [2024-12-10 05:53:46.049146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.256 qpair failed and we were unable to recover it. 00:28:58.256 [2024-12-10 05:53:46.049430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.256 [2024-12-10 05:53:46.049463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.256 qpair failed and we were unable to recover it. 00:28:58.256 [2024-12-10 05:53:46.049736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.256 [2024-12-10 05:53:46.049768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.256 qpair failed and we were unable to recover it. 00:28:58.256 [2024-12-10 05:53:46.049949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.256 [2024-12-10 05:53:46.049980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.256 qpair failed and we were unable to recover it. 00:28:58.256 [2024-12-10 05:53:46.050250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.256 [2024-12-10 05:53:46.050282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.256 qpair failed and we were unable to recover it. 
00:28:58.256 [2024-12-10 05:53:46.050474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.256 [2024-12-10 05:53:46.050505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.256 qpair failed and we were unable to recover it. 00:28:58.256 [2024-12-10 05:53:46.050688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.256 [2024-12-10 05:53:46.050719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.256 qpair failed and we were unable to recover it. 00:28:58.256 [2024-12-10 05:53:46.050894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.256 [2024-12-10 05:53:46.050926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.256 qpair failed and we were unable to recover it. 00:28:58.256 [2024-12-10 05:53:46.051104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.256 [2024-12-10 05:53:46.051137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.256 qpair failed and we were unable to recover it. 00:28:58.256 [2024-12-10 05:53:46.051426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.256 [2024-12-10 05:53:46.051459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.256 qpair failed and we were unable to recover it. 
00:28:58.256 [2024-12-10 05:53:46.051665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.256 [2024-12-10 05:53:46.051697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.256 qpair failed and we were unable to recover it. 00:28:58.256 [2024-12-10 05:53:46.051998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.052029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.052291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.052324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.052623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.052655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.052900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.052932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 
00:28:58.257 [2024-12-10 05:53:46.053109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.053140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.053382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.053414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.053611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.053642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.053840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.053871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.054064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.054096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 
00:28:58.257 [2024-12-10 05:53:46.054280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.054313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.054490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.054521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.054815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.054847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.055121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.055152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.055445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.055478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 
00:28:58.257 [2024-12-10 05:53:46.055681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.055713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.055951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.055982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.056162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.056204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.056499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.056530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.056795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.056826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 
00:28:58.257 [2024-12-10 05:53:46.057128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.057159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.057374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.057407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.057653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.057685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.058020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.058098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.058425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.058464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 
00:28:58.257 [2024-12-10 05:53:46.058685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.058719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.058995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.059028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.059281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.059315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.059592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.059624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.059832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.059863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 
00:28:58.257 [2024-12-10 05:53:46.060123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.060154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.060298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.060331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.060599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.060632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.060880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.060913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 00:28:58.257 [2024-12-10 05:53:46.061110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.061141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.257 qpair failed and we were unable to recover it. 
00:28:58.257 [2024-12-10 05:53:46.061427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.257 [2024-12-10 05:53:46.061459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.061656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.061689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.061947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.061979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.062253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.062286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.062438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.062469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 
00:28:58.258 [2024-12-10 05:53:46.062765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.062797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.062994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.063025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.063275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.063308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.063488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.063520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.063720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.063751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 
00:28:58.258 [2024-12-10 05:53:46.064044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.064076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.064307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.064341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.064588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.064620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.064810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.064842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.065123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.065155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 
00:28:58.258 [2024-12-10 05:53:46.065469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.065502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.065770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.065802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.066093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.066125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.066424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.066456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.066728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.066760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 
00:28:58.258 [2024-12-10 05:53:46.067047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.067079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.067359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.067393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.067670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.067701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.067992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.068023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.068250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.068284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 
00:28:58.258 [2024-12-10 05:53:46.068564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.068595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.068892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.068923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.069196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.069229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.069508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.069545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 00:28:58.258 [2024-12-10 05:53:46.069823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.258 [2024-12-10 05:53:46.069855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.258 qpair failed and we were unable to recover it. 
00:28:58.258 [2024-12-10 05:53:46.070130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.258 [2024-12-10 05:53:46.070162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.258 qpair failed and we were unable to recover it.
00:28:58.261 [message repeated for every subsequent connection attempt from 05:53:46.070457 through 05:53:46.102062: posix_sock_create connect() to 10.0.0.2, port=4420 failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reported a sock connection error for tqpair=0x7f4758000b90, and each qpair failed and could not be recovered]
00:28:58.261 [2024-12-10 05:53:46.102362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.261 [2024-12-10 05:53:46.102396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.261 qpair failed and we were unable to recover it. 00:28:58.261 [2024-12-10 05:53:46.102660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.261 [2024-12-10 05:53:46.102692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.261 qpair failed and we were unable to recover it. 00:28:58.261 [2024-12-10 05:53:46.102888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.261 [2024-12-10 05:53:46.102921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.261 qpair failed and we were unable to recover it. 00:28:58.261 [2024-12-10 05:53:46.103190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.261 [2024-12-10 05:53:46.103229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.261 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.103567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.103599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 
00:28:58.538 [2024-12-10 05:53:46.103850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.103881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.104015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.104046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.104295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.104329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.104635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.104667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.104952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.104983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 
00:28:58.538 [2024-12-10 05:53:46.105261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.105295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.105587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.105619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.105892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.105923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.106060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.106091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.106369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.106401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 
00:28:58.538 [2024-12-10 05:53:46.106606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.106638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.106833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.106864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.107142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.107186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.107483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.107513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.107771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.107802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 
00:28:58.538 [2024-12-10 05:53:46.108054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.108087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.108310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.108343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.108545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.108577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.108765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.108797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.108974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.109004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 
00:28:58.538 [2024-12-10 05:53:46.109212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.109244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.109514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.109545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.109681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.109712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.109965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.109999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.110297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.110331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 
00:28:58.538 [2024-12-10 05:53:46.110631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.110664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.110929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.110959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.111155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.111197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.538 [2024-12-10 05:53:46.111395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.538 [2024-12-10 05:53:46.111427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.538 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.111705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.111736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 
00:28:58.539 [2024-12-10 05:53:46.111937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.111969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.112247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.112281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.112562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.112593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.112820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.112851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.113054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.113086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 
00:28:58.539 [2024-12-10 05:53:46.113392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.113424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.113689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.113721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.114021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.114053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.114319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.114358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.114540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.114572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 
00:28:58.539 [2024-12-10 05:53:46.114822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.114854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.114996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.115026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.115229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.115262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.115398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.115430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.115681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.115712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 
00:28:58.539 [2024-12-10 05:53:46.115895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.115925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.116225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.116258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.116532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.116564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.116694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.116724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.116998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.117029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 
00:28:58.539 [2024-12-10 05:53:46.117321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.117354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.117503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.117534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.117761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.117793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.118067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.118099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.118364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.118397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 
00:28:58.539 [2024-12-10 05:53:46.118669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.118701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.118920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.118951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.119234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.119266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.119520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.119552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.119836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.119867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 
00:28:58.539 [2024-12-10 05:53:46.120118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.120150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.120389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.120421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.120566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.120598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.120873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.120905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.121108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.121140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 
00:28:58.539 [2024-12-10 05:53:46.121441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.121474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.121678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.121710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.122009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.539 [2024-12-10 05:53:46.122040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.539 qpair failed and we were unable to recover it. 00:28:58.539 [2024-12-10 05:53:46.122324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.122358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.122570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.122602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 
00:28:58.540 [2024-12-10 05:53:46.122876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.122908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.123189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.123222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.123450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.123482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.123731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.123763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.124023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.124054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 
00:28:58.540 [2024-12-10 05:53:46.124352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.124386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.124654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.124685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.124883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.124914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.125134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.125187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.125472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.125504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 
00:28:58.540 [2024-12-10 05:53:46.125780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.125812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.126023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.126054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.126245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.126276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.126495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.126527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.126721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.126753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 
00:28:58.540 [2024-12-10 05:53:46.126958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.126990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.127261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.127294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.127585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.127616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.127896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.127929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.128218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.128252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 
00:28:58.540 [2024-12-10 05:53:46.128526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.128558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.128760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.128792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.129019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.129050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.129271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.129304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.129580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.129611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 
00:28:58.540 [2024-12-10 05:53:46.129895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.129927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.130210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.130243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.130526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.130557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.130777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.130809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.131011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.131042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 
00:28:58.540 [2024-12-10 05:53:46.131248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.131281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.131481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.131513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.131792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.131823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.131964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.131996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.132199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.132232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 
00:28:58.540 [2024-12-10 05:53:46.132517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.132550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.132747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.540 [2024-12-10 05:53:46.132779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.540 qpair failed and we were unable to recover it. 00:28:58.540 [2024-12-10 05:53:46.133033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.133065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.133316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.133349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.133652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.133684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 
00:28:58.541 [2024-12-10 05:53:46.133821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.133851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.134125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.134157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.134468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.134500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.134766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.134797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.134993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.135025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 
00:28:58.541 [2024-12-10 05:53:46.135229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.135263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.135515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.135547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.135855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.135887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.136188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.136227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.136488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.136520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 
00:28:58.541 [2024-12-10 05:53:46.136799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.136830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.137113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.137145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.137381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.137413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.137690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.137722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.137907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.137940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 
00:28:58.541 [2024-12-10 05:53:46.138208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.138241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.138422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.138454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.138706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.138738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.139013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.139044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.139278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.139312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 
00:28:58.541 [2024-12-10 05:53:46.139591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.139623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.139873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.139905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.140217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.140251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.140535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.140567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.140843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.140874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 
00:28:58.541 [2024-12-10 05:53:46.141083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.141115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.141398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.141432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.141636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.141668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.141918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.141949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.142251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.142284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 
00:28:58.541 [2024-12-10 05:53:46.142551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.142582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.142835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.142867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.143138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.143181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.143463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.143495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 00:28:58.541 [2024-12-10 05:53:46.143772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.541 [2024-12-10 05:53:46.143806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.541 qpair failed and we were unable to recover it. 
00:28:58.542 [2024-12-10 05:53:46.144079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.144110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.144400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.144434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.144715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.144746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.145015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.145048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.145290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.145324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 
00:28:58.542 [2024-12-10 05:53:46.145530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.145562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.145844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.145876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.146128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.146159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.146369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.146401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.146558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.146590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 
00:28:58.542 [2024-12-10 05:53:46.146774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.146805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.147070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.147102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.147377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.147409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.147662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.147699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.147949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.147980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 
00:28:58.542 [2024-12-10 05:53:46.148198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.148232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.148421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.148454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.148730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.148762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.148954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.148986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.149251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.149285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 
00:28:58.542 [2024-12-10 05:53:46.149464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.149496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.149707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.149740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.150003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.150034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.150255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.150288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.150404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.150434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 
00:28:58.542 [2024-12-10 05:53:46.150711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.150743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.150999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.151031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.151316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.151350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.151627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.151659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 00:28:58.542 [2024-12-10 05:53:46.151855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.542 [2024-12-10 05:53:46.151887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.542 qpair failed and we were unable to recover it. 
00:28:58.542 [2024-12-10 05:53:46.152147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.542 [2024-12-10 05:53:46.152201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.542 qpair failed and we were unable to recover it.
00:28:58.542 [2024-12-10 05:53:46.152509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.542 [2024-12-10 05:53:46.152541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.152817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.152849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.153124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.153155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.153450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.153482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.153752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.153784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.153993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.154024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.154223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.154256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.154530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.154562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.154841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.154873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.155203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.155280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.155520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.155560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.155875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.155911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.156180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.156215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.156467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.156501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.156739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.156773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.157052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.157086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.157365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.157418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.157689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.157722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.157922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.157954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.158217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.158252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.158452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.158485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.158749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.158781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.158982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.159025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.159227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.159261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.159564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.159597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.159858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.159891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.160118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.160151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.160358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.160391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.160618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.160651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.160950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.161001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.161254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.161288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.161569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.161602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.161855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.161888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.162117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.162149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.162371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.162405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.162596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.162629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.162864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.162896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.163116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.163149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.163441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.163475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.543 qpair failed and we were unable to recover it.
00:28:58.543 [2024-12-10 05:53:46.163747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.543 [2024-12-10 05:53:46.163781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.164072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.164105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.164413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.164447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.164705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.164738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.164955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.164988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.165287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.165321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.165611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.165644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.165895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.165929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.166126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.166159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.166464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.166498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.166865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.166944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.167285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.167326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.167532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.167567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.167772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.167805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.168024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.168056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.168311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.168344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.168603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.168635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.168855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.168887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.169070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.169101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.169419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.169454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.169730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.169762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.169961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.169993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.170299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.170333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.170619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.170651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.170953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.170985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.171252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.171286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.171507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.171539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.171817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.171848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.172138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.172180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.172399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.172432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.172635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.172667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.172895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.172926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.173186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.173220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.173473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.173505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.173718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.173750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.173954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.173986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.174238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.174273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.174574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.174615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.174869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.174903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.175085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.544 [2024-12-10 05:53:46.175117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.544 qpair failed and we were unable to recover it.
00:28:58.544 [2024-12-10 05:53:46.175345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.175377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.175601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.175633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.175923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.175956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.176231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.176264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.176483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.176515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.176725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.176757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.177007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.177039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.177319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.177351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.177510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.177542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.177735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.177768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.177993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.178024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.178245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.178279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.178468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.178502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.178625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.178656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.178798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.545 [2024-12-10 05:53:46.178829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.545 qpair failed and we were unable to recover it.
00:28:58.545 [2024-12-10 05:53:46.179012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.179045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.179287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.179320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.179465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.179501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.179788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.179820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.180070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.180103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 
00:28:58.545 [2024-12-10 05:53:46.180409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.180444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.180722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.180756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.180980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.181013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.181239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.181272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.181544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.181575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 
00:28:58.545 [2024-12-10 05:53:46.181870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.181903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.182102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.182133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.182346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.182379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.182581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.182614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.182804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.182838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 
00:28:58.545 [2024-12-10 05:53:46.183043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.183076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.183266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.183299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.183556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.183588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.183832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.183863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.183984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.184016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 
00:28:58.545 [2024-12-10 05:53:46.184184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.184218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.184476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.184509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.184693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.184725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.185036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.545 [2024-12-10 05:53:46.185069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.545 qpair failed and we were unable to recover it. 00:28:58.545 [2024-12-10 05:53:46.185316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.185350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 
00:28:58.546 [2024-12-10 05:53:46.185641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.185672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.185873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.185905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.186181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.186214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.186462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.186494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.186743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.186781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 
00:28:58.546 [2024-12-10 05:53:46.186980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.187013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.187204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.187238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.187443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.187475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.187665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.187697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.187972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.188004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 
00:28:58.546 [2024-12-10 05:53:46.188214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.188247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.188481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.188515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.188734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.188768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.189052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.189085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.189371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.189405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 
00:28:58.546 [2024-12-10 05:53:46.189612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.189645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.189909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.189941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.190124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.190156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.190443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.190475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.190728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.190760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 
00:28:58.546 [2024-12-10 05:53:46.191012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.191044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.191318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.191351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.191624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.191656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.191938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.191970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.192149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.192211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 
00:28:58.546 [2024-12-10 05:53:46.192495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.192533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.192785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.192817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.193070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.193102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.193421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.193454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.193591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.193623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 
00:28:58.546 [2024-12-10 05:53:46.193899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.193931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.194188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.194221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.194443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.194475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.194676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.194708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.194965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.194996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 
00:28:58.546 [2024-12-10 05:53:46.195192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.195225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.195408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.195439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.195691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.195724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.546 qpair failed and we were unable to recover it. 00:28:58.546 [2024-12-10 05:53:46.195932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.546 [2024-12-10 05:53:46.195964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.196185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.196217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 
00:28:58.547 [2024-12-10 05:53:46.196490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.196522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.196754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.196786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.196980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.197011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.197223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.197256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.197458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.197489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 
00:28:58.547 [2024-12-10 05:53:46.197618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.197649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.197866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.197898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.198198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.198247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.198521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.198553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.198695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.198727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 
00:28:58.547 [2024-12-10 05:53:46.198929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.198961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.199147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.199186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.199412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.199444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.199654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.199687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.199895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.199927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 
00:28:58.547 [2024-12-10 05:53:46.200205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.200238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.200443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.200475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.200692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.200724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.200857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.200889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 00:28:58.547 [2024-12-10 05:53:46.201136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.547 [2024-12-10 05:53:46.201178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.547 qpair failed and we were unable to recover it. 
00:28:58.547 [2024-12-10 05:53:46.201431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.547 [2024-12-10 05:53:46.201463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.547 qpair failed and we were unable to recover it.
00:28:58.550 [2024-12-10 05:53:46.231586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.231618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.231896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.231928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.232196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.232229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.232524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.232556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.232826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.232859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 
00:28:58.550 [2024-12-10 05:53:46.233097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.233130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.233389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.233423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.233701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.233733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.234009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.234041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.234268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.234301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 
00:28:58.550 [2024-12-10 05:53:46.234517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.234548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.234826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.234859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.235054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.235085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.235347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.235380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 00:28:58.550 [2024-12-10 05:53:46.235591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.550 [2024-12-10 05:53:46.235624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.550 qpair failed and we were unable to recover it. 
00:28:58.551 [2024-12-10 05:53:46.235886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.235918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.236113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.236145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.236422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.236455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.236603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.236635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.236918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.236950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 
00:28:58.551 [2024-12-10 05:53:46.237161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.237204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.237488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.237519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.237794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.237826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.238087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.238119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.238399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.238432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 
00:28:58.551 [2024-12-10 05:53:46.238627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.238659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.238934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.238966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.239177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.239216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.239415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.239447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.239587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.239618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 
00:28:58.551 [2024-12-10 05:53:46.239896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.239927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.240130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.240162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.240446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.240479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.240755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.240787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.241025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.241056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 
00:28:58.551 [2024-12-10 05:53:46.241236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.241269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.241459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.241490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.241766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.241799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.242088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.242120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.242342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.242375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 
00:28:58.551 [2024-12-10 05:53:46.242670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.242703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.242913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.242945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.243155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.243196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.243448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.243481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.243673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.243705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 
00:28:58.551 [2024-12-10 05:53:46.243955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.243987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.244284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.244318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.244588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.244620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.244821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.244852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 00:28:58.551 [2024-12-10 05:53:46.244999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.551 [2024-12-10 05:53:46.245031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.551 qpair failed and we were unable to recover it. 
00:28:58.552 [2024-12-10 05:53:46.245293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.245328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.245539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.245571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.245756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.245787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.246063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.246094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.246366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.246406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 
00:28:58.552 [2024-12-10 05:53:46.246667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.246698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.246896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.246928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.247199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.247232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.247496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.247527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.247779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.247811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 
00:28:58.552 [2024-12-10 05:53:46.248034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.248066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.248341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.248374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.248651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.248683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.248903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.248934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.249112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.249144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 
00:28:58.552 [2024-12-10 05:53:46.249438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.249472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.249671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.249703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.249982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.250013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.250308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.250343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.250611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.250643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 
00:28:58.552 [2024-12-10 05:53:46.250861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.250892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.251155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.251197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.251484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.251516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.251791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.251823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 00:28:58.552 [2024-12-10 05:53:46.252087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.252119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it. 
00:28:58.552 [2024-12-10 05:53:46.252425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.552 [2024-12-10 05:53:46.252458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.552 qpair failed and we were unable to recover it.
00:28:58.555 [2024-12-10 05:53:46.284569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.284603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.284880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.284912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.285092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.285124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.285384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.285418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.285645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.285678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 
00:28:58.555 [2024-12-10 05:53:46.285898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.285930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.286203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.286237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.286474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.286505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.286784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.286816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.287016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.287048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 
00:28:58.555 [2024-12-10 05:53:46.287249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.287283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.287532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.287562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.287766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.287797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.288069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.288103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 00:28:58.555 [2024-12-10 05:53:46.288384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.288419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.555 qpair failed and we were unable to recover it. 
00:28:58.555 [2024-12-10 05:53:46.288615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.555 [2024-12-10 05:53:46.288647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.288911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.288944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.289146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.289191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.289466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.289499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.289695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.289727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 
00:28:58.556 [2024-12-10 05:53:46.289913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.289945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.290218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.290252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.290520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.290552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.290768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.290800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.291072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.291104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 
00:28:58.556 [2024-12-10 05:53:46.291365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.291398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.291599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.291631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.291925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.291957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.292253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.292287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.292477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.292510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 
00:28:58.556 [2024-12-10 05:53:46.292786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.292818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.293038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.293069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.293272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.293305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.293557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.293590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.293854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.293885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 
00:28:58.556 [2024-12-10 05:53:46.294189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.294222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.294490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.294523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.294705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.294737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.294960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.294992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.295210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.295245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 
00:28:58.556 [2024-12-10 05:53:46.295508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.295540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.295814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.295852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.296132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.296164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.296372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.296404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.296603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.296635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 
00:28:58.556 [2024-12-10 05:53:46.296913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.296946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.297164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.297206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.297402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.297434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.297729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.297762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 00:28:58.556 [2024-12-10 05:53:46.298038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.556 [2024-12-10 05:53:46.298070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.556 qpair failed and we were unable to recover it. 
00:28:58.557 [2024-12-10 05:53:46.298327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.298360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.298631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.298664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.298932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.298964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.299206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.299239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.299489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.299522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 
00:28:58.557 [2024-12-10 05:53:46.299810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.299842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.300034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.300065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.300326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.300359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.300560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.300593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.300866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.300897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 
00:28:58.557 [2024-12-10 05:53:46.301189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.301222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.301372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.301403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.301695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.301726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.301931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.301963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.302263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.302297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 
00:28:58.557 [2024-12-10 05:53:46.302483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.302515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.302814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.302847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.303136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.303177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.303445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.303477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.303693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.303726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 
00:28:58.557 [2024-12-10 05:53:46.303980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.304012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.304308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.304342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.304639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.304671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.304864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.304896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.305098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.305130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 
00:28:58.557 [2024-12-10 05:53:46.305410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.305443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.305628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.305660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.305934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.305966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.306191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.306225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.306424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.306456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 
00:28:58.557 [2024-12-10 05:53:46.306708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.306741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.307014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.307046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.307251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.307290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.307556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.307588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.307878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.307911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 
00:28:58.557 [2024-12-10 05:53:46.308194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.308228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.308470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.308502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.308782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.308819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.309097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.557 [2024-12-10 05:53:46.309129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.557 qpair failed and we were unable to recover it. 00:28:58.557 [2024-12-10 05:53:46.309409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.309443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 
00:28:58.558 [2024-12-10 05:53:46.309729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.309762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.310043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.310074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.310364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.310397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.310614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.310646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.310842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.310874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 
00:28:58.558 [2024-12-10 05:53:46.311130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.311162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.311463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.311496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.311764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.311795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.312018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.312049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.312354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.312387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 
00:28:58.558 [2024-12-10 05:53:46.312638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.312670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.312982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.313014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.313145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.313187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.313467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.313499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.313701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.313734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 
00:28:58.558 [2024-12-10 05:53:46.313933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.313966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.314224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.314257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.314561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.314593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.314858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.314889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.315113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.315160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 
00:28:58.558 [2024-12-10 05:53:46.315489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.315522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.315796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.315829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.316124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.316157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.316430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.316463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.316703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.316736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 
00:28:58.558 [2024-12-10 05:53:46.317032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.317064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.317358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.317391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.317664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.317697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.317897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.317928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.318145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.318185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 
00:28:58.558 [2024-12-10 05:53:46.318381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.318413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.318636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.318668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.318966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.318998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.319202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.319235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.319426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.319459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 
00:28:58.558 [2024-12-10 05:53:46.319707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.319739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.319936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.319968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.320180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.320212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.558 qpair failed and we were unable to recover it. 00:28:58.558 [2024-12-10 05:53:46.320464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.558 [2024-12-10 05:53:46.320496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.320786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.320818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 
00:28:58.559 [2024-12-10 05:53:46.321113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.321145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.321360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.321393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.321668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.321700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.321882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.321913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.322162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.322208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 
00:28:58.559 [2024-12-10 05:53:46.322491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.322524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.322799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.322830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.323116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.323148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.323428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.323462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.323657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.323688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 
00:28:58.559 [2024-12-10 05:53:46.323964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.323995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.324190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.324225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.324418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.324450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.324651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.324684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.324962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.324994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 
00:28:58.559 [2024-12-10 05:53:46.325255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.325288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.325488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.325521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.325717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.325750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.325951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.325982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.326257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.326290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 
00:28:58.559 [2024-12-10 05:53:46.326505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.326544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.326795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.326828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.327132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.327164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.327381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.327414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.327682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.327713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 
00:28:58.559 [2024-12-10 05:53:46.327921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.327953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.328131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.328163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.328460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.328492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.328761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.328793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.329060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.329093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 
00:28:58.559 [2024-12-10 05:53:46.329390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.329424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.329712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.329744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.330027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.330059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.330212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.330245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.330522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.330555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 
00:28:58.559 [2024-12-10 05:53:46.330708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.330740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.330926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.330957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.331237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.559 [2024-12-10 05:53:46.331270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.559 qpair failed and we were unable to recover it. 00:28:58.559 [2024-12-10 05:53:46.331490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.560 [2024-12-10 05:53:46.331522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.560 qpair failed and we were unable to recover it. 00:28:58.560 [2024-12-10 05:53:46.331804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.560 [2024-12-10 05:53:46.331835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.560 qpair failed and we were unable to recover it. 
00:28:58.560 [2024-12-10 05:53:46.332039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.560 [2024-12-10 05:53:46.332072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.560 qpair failed and we were unable to recover it.
[the three log lines above repeat continuously with successive timestamps through 2024-12-10 05:53:46.363562; duplicates elided]
00:28:58.563 [2024-12-10 05:53:46.363742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.363774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.363895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.363932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.364058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.364090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.364369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.364402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.364725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.364756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 
00:28:58.563 [2024-12-10 05:53:46.365060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.365092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.365272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.365305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.365567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.365599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.365800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.365831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.366107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.366140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 
00:28:58.563 [2024-12-10 05:53:46.366442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.366474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.366754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.366786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.367076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.367109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.367400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.367433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.367708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.367740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 
00:28:58.563 [2024-12-10 05:53:46.367882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.367915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.368136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.368191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.368443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.368474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.368608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.368640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.368890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.368921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 
00:28:58.563 [2024-12-10 05:53:46.369205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.369238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.369421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.369452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.369727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.369759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.370011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.370042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.370343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.370377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 
00:28:58.563 [2024-12-10 05:53:46.370646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.370677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.370818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.370850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.371098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.371130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.371345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.371378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.371585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.371617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 
00:28:58.563 [2024-12-10 05:53:46.371832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.371863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.372125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.372157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.372365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.372397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.372599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.372633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.372813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.372844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 
00:28:58.563 [2024-12-10 05:53:46.373118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.373149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.563 [2024-12-10 05:53:46.373430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.563 [2024-12-10 05:53:46.373463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.563 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.373750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.373783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.374061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.374093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.374380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.374413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 
00:28:58.564 [2024-12-10 05:53:46.374694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.374727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.374953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.374985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.375276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.375310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.375564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.375596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.375808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.375840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 
00:28:58.564 [2024-12-10 05:53:46.375969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.376000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.376278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.376312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.376564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.376596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.376798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.376830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.377107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.377139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 
00:28:58.564 [2024-12-10 05:53:46.377400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.377433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.377615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.377647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.377926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.377958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.378231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.378265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.378476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.378507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 
00:28:58.564 [2024-12-10 05:53:46.378701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.378733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.378986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.379023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.379328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.379362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.379642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.379675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.379939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.379971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 
00:28:58.564 [2024-12-10 05:53:46.380221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.380254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.380526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.380558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.380810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.380843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.381141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.381181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.381444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.381476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 
00:28:58.564 [2024-12-10 05:53:46.381751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.381783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.382060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.382092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.382297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.382331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.382604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.382636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.382889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.382926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 
00:28:58.564 [2024-12-10 05:53:46.383192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.383225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.383440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.383472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.383677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.383708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.383960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.383991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 00:28:58.564 [2024-12-10 05:53:46.384244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.564 [2024-12-10 05:53:46.384277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.564 qpair failed and we were unable to recover it. 
00:28:58.564 [2024-12-10 05:53:46.384549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.564 [2024-12-10 05:53:46.384580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.564 qpair failed and we were unable to recover it.
00:28:58.564 [2024-12-10 05:53:46.384876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.564 [2024-12-10 05:53:46.384908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.385128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.385160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.385447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.385479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.385730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.385761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.385966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.385998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.386250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.386283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.386492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.386525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.386813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.386845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.387127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.387159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.387443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.387475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.387705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.387737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.387989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.388021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.388321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.388354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.388553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.388585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.388833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.388864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.389136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.389178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.389463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.389495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.389693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.389724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.389969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.390001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.390284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.390317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.390517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.390550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.390697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.390730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.391021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.391052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.391339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.391372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.391672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.391708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.391911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.391943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.392227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.392261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.392462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.392494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.392624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.392656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.392850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.392885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.393108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.393140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.393452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.393485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.393738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.565 [2024-12-10 05:53:46.393770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.565 qpair failed and we were unable to recover it.
00:28:58.565 [2024-12-10 05:53:46.393913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.393945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.394226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.394261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.394526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.394558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.394764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.394796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.395054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.395086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.395334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.395369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.395668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.395706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.395984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.396015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.396294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.396327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.396606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.396638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.396895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.396927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.397139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.397179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.397460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.397493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.397709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.397741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.398013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.398054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.398341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.398375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.398612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.398644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.398824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.398857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.399078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.399109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.399312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.399345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.399640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.399672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.399977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.400009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.400275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.400308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.400606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.400638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.400908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.400941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.401244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.401276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.401537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.401569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.401773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.401805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.402072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.402109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.402376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.402411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.402599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.402630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.402911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.402943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.403230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.403264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.403514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.403546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.403766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.403798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.404044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.404076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.404353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.404387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.404661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.404693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.404983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.405019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.566 [2024-12-10 05:53:46.405294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.566 [2024-12-10 05:53:46.405327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.566 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.405552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.405584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.405866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.405898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.406109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.406141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.406364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.406397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.406597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.406629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.406809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.406840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.407090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.407122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.407345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.407379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.407622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.407654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.407904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.407937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.408154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.408199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.408379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.408414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.408669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.408701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.408893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.408925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.409198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.409231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.409525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.409557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.409761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.409794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.409994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.410025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.410221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.410254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.410438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.410470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.410594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.410626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.410897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.410929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.411130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.411163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.411355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.411386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.411662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.411694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.411971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.412004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.567 [2024-12-10 05:53:46.412209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.567 [2024-12-10 05:53:46.412243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.567 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.412519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.412551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.412829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.412862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.413064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.413102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.413403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.413437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.413645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.413677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.413960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.413992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.414184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.414217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.414442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.414474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.414748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.414780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.414970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.415002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.415266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.415299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.415549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.415581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.415778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.415810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.416089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.844 [2024-12-10 05:53:46.416121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.844 qpair failed and we were unable to recover it.
00:28:58.844 [2024-12-10 05:53:46.416310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.416342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.416544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.416576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.416765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.416797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.417070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.417102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.417374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.417408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 
00:28:58.844 [2024-12-10 05:53:46.417703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.417735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.418013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.418045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.418197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.418230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.418456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.418489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.418739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.418771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 
00:28:58.844 [2024-12-10 05:53:46.418968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.419000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.419251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.419284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.419536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.419568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.419821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.844 [2024-12-10 05:53:46.419852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.844 qpair failed and we were unable to recover it. 00:28:58.844 [2024-12-10 05:53:46.420072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.420104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 
00:28:58.845 [2024-12-10 05:53:46.420354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.420393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.420646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.420678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.420979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.421012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.421219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.421252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.421526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.421559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 
00:28:58.845 [2024-12-10 05:53:46.421836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.421868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.422136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.422190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.422433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.422465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.422764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.422796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.423064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.423096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 
00:28:58.845 [2024-12-10 05:53:46.423279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.423312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.423564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.423596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.423899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.423931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.424198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.424231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.424452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.424485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 
00:28:58.845 [2024-12-10 05:53:46.424765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.424798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.425091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.425122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.425399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.425433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.425724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.425756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.426047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.426079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 
00:28:58.845 [2024-12-10 05:53:46.426359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.426393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.426677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.426710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.426927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.426958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.427230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.427264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.427555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.427588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 
00:28:58.845 [2024-12-10 05:53:46.427884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.427915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.428116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.428148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.428403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.428436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.428733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.428765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.428979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.429011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 
00:28:58.845 [2024-12-10 05:53:46.429216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.429249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.429471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.429503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.429699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.429731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.429909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.429941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.430081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.430112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 
00:28:58.845 [2024-12-10 05:53:46.430349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.430383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.430637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.430669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.430929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.430961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.431258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.845 [2024-12-10 05:53:46.431293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.845 qpair failed and we were unable to recover it. 00:28:58.845 [2024-12-10 05:53:46.431561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.431593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 
00:28:58.846 [2024-12-10 05:53:46.431778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.431810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.432007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.432050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.432257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.432290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.432479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.432511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.432781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.432814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 
00:28:58.846 [2024-12-10 05:53:46.433003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.433034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.433325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.433358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.433556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.433588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.433789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.433821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.434036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.434068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 
00:28:58.846 [2024-12-10 05:53:46.434317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.434351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.434608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.434640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.434939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.434971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.435163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.435208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.435403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.435434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 
00:28:58.846 [2024-12-10 05:53:46.435711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.435744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.435966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.435998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.436199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.436233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.436379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.436411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.436683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.436714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 
00:28:58.846 [2024-12-10 05:53:46.436991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.437024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.437277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.437310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.437607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.437639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.437856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.437888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.438082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.438114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 
00:28:58.846 [2024-12-10 05:53:46.438374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.438408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.438549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.438581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.438860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.438892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.439146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.439193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.439446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.439478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 
00:28:58.846 [2024-12-10 05:53:46.439773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.439805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.440097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.440128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.440390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.440423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.440643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.440676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.440947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.440978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 
00:28:58.846 [2024-12-10 05:53:46.441275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.441309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.441490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.441522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.441787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.846 [2024-12-10 05:53:46.441819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.846 qpair failed and we were unable to recover it. 00:28:58.846 [2024-12-10 05:53:46.442019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.442051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.442234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.442268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 
00:28:58.847 [2024-12-10 05:53:46.442510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.442542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.442793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.442824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.442980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.443013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.443285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.443318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.443515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.443547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 
00:28:58.847 [2024-12-10 05:53:46.443725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.443757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.444038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.444070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.444326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.444360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.444659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.444691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.444900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.444932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 
00:28:58.847 [2024-12-10 05:53:46.445140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.445184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.445387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.445419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.445676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.445707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.446001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.446033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.446309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.446344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 
00:28:58.847 [2024-12-10 05:53:46.446623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.446655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.446864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.446897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.447177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.447211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.447333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.447365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.447569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.447601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 
00:28:58.847 [2024-12-10 05:53:46.447875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.447907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.448087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.448119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.448309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.448342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.448522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.448554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.448760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.448792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 
00:28:58.847 [2024-12-10 05:53:46.449064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.449096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.449397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.449429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.449698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.449730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.449913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.449945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.450244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.450283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 
00:28:58.847 [2024-12-10 05:53:46.450535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.450566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.450790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.450822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.451078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.451109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.451411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.451445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.451627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.451660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 
00:28:58.847 [2024-12-10 05:53:46.451838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.451870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.452154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.452198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.452412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.847 [2024-12-10 05:53:46.452444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.847 qpair failed and we were unable to recover it. 00:28:58.847 [2024-12-10 05:53:46.452626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.452658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.452929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.452961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 
00:28:58.848 [2024-12-10 05:53:46.453256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.453289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.453565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.453596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.453889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.453922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.454127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.454160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.454392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.454425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 
00:28:58.848 [2024-12-10 05:53:46.454726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.454758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.454937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.454968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.455269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.455302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.455551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.455583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.455854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.455886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 
00:28:58.848 [2024-12-10 05:53:46.456097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.456129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.456331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.456364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.456542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.456573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.456844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.456876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.457077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.457109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 
00:28:58.848 [2024-12-10 05:53:46.457440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.457472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.457730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.457768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.458071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.458103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.458365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.458397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.458697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.458729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 
00:28:58.848 [2024-12-10 05:53:46.459007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.459039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.459254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.459287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.459432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.459463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.459771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.459803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.460082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.460113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 
00:28:58.848 [2024-12-10 05:53:46.460402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.460434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.460714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.460745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.461021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.461052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.461253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.461286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.461486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.461518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 
00:28:58.848 [2024-12-10 05:53:46.461793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.461827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.461978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.462010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.848 qpair failed and we were unable to recover it. 00:28:58.848 [2024-12-10 05:53:46.462191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.848 [2024-12-10 05:53:46.462224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.849 qpair failed and we were unable to recover it. 00:28:58.849 [2024-12-10 05:53:46.462518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.849 [2024-12-10 05:53:46.462552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.849 qpair failed and we were unable to recover it. 00:28:58.849 [2024-12-10 05:53:46.462755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.849 [2024-12-10 05:53:46.462787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.849 qpair failed and we were unable to recover it. 
00:28:58.849 [2024-12-10 05:53:46.463087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.849 [2024-12-10 05:53:46.463119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.849 qpair failed and we were unable to recover it.
(the connect()/qpair error pair above repeats verbatim, only timestamps changing, from [2024-12-10 05:53:46.463087] through [2024-12-10 05:53:46.492951])
00:28:58.851 [2024-12-10 05:53:46.493127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.851 [2024-12-10 05:53:46.493159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.851 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.493369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.493401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.493672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.493704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.493992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.494025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.494294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.494328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 
00:28:58.852 [2024-12-10 05:53:46.494508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.494540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.494838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.494870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.495144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.495185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.495465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.495497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.495722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.495753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 
00:28:58.852 [2024-12-10 05:53:46.496003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.496035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.496313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.496346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.496624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.496656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.496853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.496885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.497141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.497181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 
00:28:58.852 [2024-12-10 05:53:46.497388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.497420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.497625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.497663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.497816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.497848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.498047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.498080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.498390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.498424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 
00:28:58.852 [2024-12-10 05:53:46.498681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.498713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.498995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.499027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.499311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.499344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.499574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.499606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.499875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.499907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 
00:28:58.852 [2024-12-10 05:53:46.500177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.500209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.500492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.500524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.500814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.500846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.501121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.501152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.501379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.501411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 
00:28:58.852 [2024-12-10 05:53:46.501599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.501632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.501763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.501795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.502065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.502097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.502366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.502399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.502694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.502726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 
00:28:58.852 [2024-12-10 05:53:46.502995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.503027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.503330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.503363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.852 [2024-12-10 05:53:46.503586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.852 [2024-12-10 05:53:46.503618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.852 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.503886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.503918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.504158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.504200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 
00:28:58.853 [2024-12-10 05:53:46.504383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.504415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.504609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.504642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.504872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.504904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.505183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.505223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.505523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.505556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 
00:28:58.853 [2024-12-10 05:53:46.505831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.505862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.506114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.506146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.506462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.506495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.506793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.506825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.507025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.507058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 
00:28:58.853 [2024-12-10 05:53:46.507261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.507295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.507567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.507599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.507885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.507917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.508177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.508211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.508478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.508510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 
00:28:58.853 [2024-12-10 05:53:46.508781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.508813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.509063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.509096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.509364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.509398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.509679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.509710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.509917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.509949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 
00:28:58.853 [2024-12-10 05:53:46.510258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.510291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.510541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.510573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.510854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.510885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.511161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.511207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.511488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.511521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 
00:28:58.853 [2024-12-10 05:53:46.511793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.511824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.512113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.512145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.512297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.512330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.512622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.512654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.512926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.512958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 
00:28:58.853 [2024-12-10 05:53:46.513236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.513270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.853 [2024-12-10 05:53:46.513457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.853 [2024-12-10 05:53:46.513489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.853 qpair failed and we were unable to recover it. 00:28:58.854 [2024-12-10 05:53:46.513738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.854 [2024-12-10 05:53:46.513770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.854 qpair failed and we were unable to recover it. 00:28:58.854 [2024-12-10 05:53:46.513962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.854 [2024-12-10 05:53:46.513993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.854 qpair failed and we were unable to recover it. 00:28:58.854 [2024-12-10 05:53:46.514273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.854 [2024-12-10 05:53:46.514307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.854 qpair failed and we were unable to recover it. 
00:28:58.854 [2024-12-10 05:53:46.514637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.854 [2024-12-10 05:53:46.514669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.854 qpair failed and we were unable to recover it.
[... same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated for timestamps 05:53:46.514875 through 05:53:46.545355 ...]
00:28:58.856 [2024-12-10 05:53:46.545555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.856 [2024-12-10 05:53:46.545587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.856 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.545789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.545821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.546073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.546104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.546394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.546428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.546689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.546720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 
00:28:58.857 [2024-12-10 05:53:46.547022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.547056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.547327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.547360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.547639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.547672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.547960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.547993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.548132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.548163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 
00:28:58.857 [2024-12-10 05:53:46.548424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.548457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.548607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.548639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.548893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.548926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.549191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.549223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.549476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.549508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 
00:28:58.857 [2024-12-10 05:53:46.549812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.549844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.550109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.550142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.550437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.550470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.550694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.550728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.550981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.551013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 
00:28:58.857 [2024-12-10 05:53:46.551312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.551345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.551560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.551593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.551810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.551842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.552149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.552191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.552399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.552431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 
00:28:58.857 [2024-12-10 05:53:46.552639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.552671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.552940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.552972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.553270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.553304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.553575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.553608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.553852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.553884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 
00:28:58.857 [2024-12-10 05:53:46.554099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.554137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.554352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.554385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.554682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.554715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.554966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.554998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 00:28:58.857 [2024-12-10 05:53:46.555289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.857 [2024-12-10 05:53:46.555324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.857 qpair failed and we were unable to recover it. 
00:28:58.857 [2024-12-10 05:53:46.555525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.555558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.555812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.555844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.556021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.556053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.556345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.556378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.556600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.556633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 
00:28:58.858 [2024-12-10 05:53:46.556906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.556939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.557238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.557272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.557537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.557569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.557775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.557807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.558009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.558041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 
00:28:58.858 [2024-12-10 05:53:46.558198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.558232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.558483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.558515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.558715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.558747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.559001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.559033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.559227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.559261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 
00:28:58.858 [2024-12-10 05:53:46.559542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.559574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.559902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.559935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.560210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.560243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.560451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.560483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.560757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.560790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 
00:28:58.858 [2024-12-10 05:53:46.561010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.561041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.561221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.561254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.561506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.561539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.561804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.561837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.562047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.562087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 
00:28:58.858 [2024-12-10 05:53:46.562284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.562317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.562537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.562570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.562874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.562906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.563179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.563213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.563488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.563520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 
00:28:58.858 [2024-12-10 05:53:46.563769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.563801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.564006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.564038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.564313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.564346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.564599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.564632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.564841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.564874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 
00:28:58.858 [2024-12-10 05:53:46.565150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.565195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.565479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.565512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.858 qpair failed and we were unable to recover it. 00:28:58.858 [2024-12-10 05:53:46.565715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.858 [2024-12-10 05:53:46.565746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.859 qpair failed and we were unable to recover it. 00:28:58.859 [2024-12-10 05:53:46.565856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.859 [2024-12-10 05:53:46.565888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.859 qpair failed and we were unable to recover it. 00:28:58.859 [2024-12-10 05:53:46.566145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.859 [2024-12-10 05:53:46.566189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.859 qpair failed and we were unable to recover it. 
00:28:58.859 [2024-12-10 05:53:46.566336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.566367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.566505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.566537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.566826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.566859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.567068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.567101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.567269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.567303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.567505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.567537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.567732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.567765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.568045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.568077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.568260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.568293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.568519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.568551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.568746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.568778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.569096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.569129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.569371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.569406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.569679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.569712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.569926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.569959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.570231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.570264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.570381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.570413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.570688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.570721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.570977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.571009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.571210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.571243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.571439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.571470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.571674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.571708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.571989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.572021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.572319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.572359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.572556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.572594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.572841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.572872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.573055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.573088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.573271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.573305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.573585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.573616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.573800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.573832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.574099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.574131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.574291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.574329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.574610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.574643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.574871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.574903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.575100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.575133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.575322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.575356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.575568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.859 [2024-12-10 05:53:46.575601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.859 qpair failed and we were unable to recover it.
00:28:58.859 [2024-12-10 05:53:46.575810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.575841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.576088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.576120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.576343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.576377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.576579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.576611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.576771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.576802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.577081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.577113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.577331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.577365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.577514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.577547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.577752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.577784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.577976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.578008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.578203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.578237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.578372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.578405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.578608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.578639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.578968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.579000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.579154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.579197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.579399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.579432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.579633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.579665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.579906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.579937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.580128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.580160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.580372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.580405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.580657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.580689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.580954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.580986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.581239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.581272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.581476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.581508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.581661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.581694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.581991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.582023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.582219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.582253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.582310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dd0f0 (9): Bad file descriptor
00:28:58.860 [2024-12-10 05:53:46.582621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.582698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.582921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.582956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.583214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.583251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.583411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.583445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.583695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.583728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.583868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.583900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.584185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.584220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.584379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.584411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.584627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.584660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.584893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.584927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.585208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.585242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.585505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.860 [2024-12-10 05:53:46.585538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.860 qpair failed and we were unable to recover it.
00:28:58.860 [2024-12-10 05:53:46.585731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.585763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.586028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.586060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.586264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.586297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.586567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.586599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.586759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.586791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.587042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.587073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.587283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.587317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.587544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.587575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.587831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.587863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.588122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.588155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.588318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.588352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.588542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.588574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.588780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.588812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.589001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.589033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.589181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.589221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.589410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.589443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.589576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.589608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.589805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.589836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.590038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.590070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.590310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.590343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.590617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.590650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.590955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.590987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.591252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.591287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.591421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.591453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.591706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.591738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.591928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.591960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.592164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.592205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.592388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.592420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.592748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.592781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.593057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.593090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.593371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.593404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.593670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.593702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.593999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.594032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.594217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.594250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.594455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.594487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.594665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.594697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.595039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.595071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.595211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.595245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.595449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.861 [2024-12-10 05:53:46.595481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:58.861 qpair failed and we were unable to recover it.
00:28:58.861 [2024-12-10 05:53:46.595757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.595790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.596001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.596034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.596364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.596441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.596726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.596764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.597071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.597106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 
00:28:58.862 [2024-12-10 05:53:46.597365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.597401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.597626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.597661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.597927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.597973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.598196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.598229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.598434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.598467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 
00:28:58.862 [2024-12-10 05:53:46.598700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.598733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.598871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.598924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.599190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.599224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.599442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.599476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.599768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.599803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 
00:28:58.862 [2024-12-10 05:53:46.600032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.600077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.600268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.600303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.600509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.600541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.600817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.600850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.601079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.601113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 
00:28:58.862 [2024-12-10 05:53:46.601457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.601490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.601717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.601750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.602041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.602077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.602351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.602384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.602541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.602573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 
00:28:58.862 [2024-12-10 05:53:46.602871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.602903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.603192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.603224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.603396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.603430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.603637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.603670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.604011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.604045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 
00:28:58.862 [2024-12-10 05:53:46.604263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.604300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.604466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.604501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.604755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.604787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.604980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.862 [2024-12-10 05:53:46.605019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.862 qpair failed and we were unable to recover it. 00:28:58.862 [2024-12-10 05:53:46.605291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.605327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 
00:28:58.863 [2024-12-10 05:53:46.605522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.605554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.605751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.605786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.606037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.606075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.606257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.606293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.606499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.606534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 
00:28:58.863 [2024-12-10 05:53:46.606790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.606822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.606943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.606976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.607293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.607329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.607538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.607571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.607772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.607805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 
00:28:58.863 [2024-12-10 05:53:46.608064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.608103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.608247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.608281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.608481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.608513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.608664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.608697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.608914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.608946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 
00:28:58.863 [2024-12-10 05:53:46.609139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.609187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.609328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.609360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.609654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.609686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.609886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.609919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.610113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.610146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 
00:28:58.863 [2024-12-10 05:53:46.610415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.610456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.610733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.610766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.611047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.611079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.611302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.611337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.611461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.611494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 
00:28:58.863 [2024-12-10 05:53:46.611716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.611749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.611976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.612010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.612264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.612304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.612450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.612483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.612704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.612739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 
00:28:58.863 [2024-12-10 05:53:46.613031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.613064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.613293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.613327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.613583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.613615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.613875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.613907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.614148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.614192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 
00:28:58.863 [2024-12-10 05:53:46.614424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.614455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.614686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.614721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.614850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.863 [2024-12-10 05:53:46.614884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.863 qpair failed and we were unable to recover it. 00:28:58.863 [2024-12-10 05:53:46.615149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.615194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.615414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.615447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 
00:28:58.864 [2024-12-10 05:53:46.615650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.615683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.615887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.615919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.616121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.616154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.616305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.616339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.616556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.616601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 
00:28:58.864 [2024-12-10 05:53:46.616918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.616952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.617251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.617299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.617611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.617688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.617942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.617980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.618239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.618276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 
00:28:58.864 [2024-12-10 05:53:46.618459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.618491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.618702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.618734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.618993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.619025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.619298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.619331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.619558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.619589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 
00:28:58.864 [2024-12-10 05:53:46.619791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.619825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.620096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.620127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.620360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.620395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.620599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.620632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.620767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.620798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 
00:28:58.864 [2024-12-10 05:53:46.620992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.621034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.621299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.621333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.621534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.621566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.621910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.621943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.622220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.622255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 
00:28:58.864 [2024-12-10 05:53:46.622482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.622512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.622792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.622824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.623028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.623060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.623292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.623326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.623458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.623489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 
00:28:58.864 [2024-12-10 05:53:46.623701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.623732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.623926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.623958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.624208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.624242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.624559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.624590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.864 qpair failed and we were unable to recover it. 00:28:58.864 [2024-12-10 05:53:46.624895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.864 [2024-12-10 05:53:46.624927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 
00:28:58.865 [2024-12-10 05:53:46.625070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.625101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.625328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.625360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.625636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.625668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.625820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.625851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.626117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.626148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 
00:28:58.865 [2024-12-10 05:53:46.626345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.626377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.626580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.626612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.626904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.626935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.627157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.627206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.627462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.627493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 
00:28:58.865 [2024-12-10 05:53:46.627636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.627667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.627894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.627926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.628119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.628151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.628357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.628390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.628554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.628586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 
00:28:58.865 [2024-12-10 05:53:46.628886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.628918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.629114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.629145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.629280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.629312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.629519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.629551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.629874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.629906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 
00:28:58.865 [2024-12-10 05:53:46.630159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.630202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.630408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.630440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.630643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.630675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.630927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.630958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.631150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.631191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 
00:28:58.865 [2024-12-10 05:53:46.631396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.631434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.631691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.631723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.631975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.632007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.632198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.632232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.632369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.632401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 
00:28:58.865 [2024-12-10 05:53:46.632550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.632582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.632711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.632743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.632878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.632909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.633093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.633124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.633410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.633443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 
00:28:58.865 [2024-12-10 05:53:46.633644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.633675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.633795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.633827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.634076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.634108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.865 qpair failed and we were unable to recover it. 00:28:58.865 [2024-12-10 05:53:46.634336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.865 [2024-12-10 05:53:46.634368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.634532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.634564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 
00:28:58.866 [2024-12-10 05:53:46.634709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.634741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.634959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.634991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.635102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.635134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.635334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.635367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.635572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.635604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 
00:28:58.866 [2024-12-10 05:53:46.635829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.635860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.635995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.636027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.636294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.636327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.636478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.636510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.636714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.636745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 
00:28:58.866 [2024-12-10 05:53:46.636999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.637031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.637322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.637355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.637615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.637693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.637980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.638018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.638322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.638360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 
00:28:58.866 [2024-12-10 05:53:46.638586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.638621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.638888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.638920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.639125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.639157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.639376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.639409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 00:28:58.866 [2024-12-10 05:53:46.639553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-12-10 05:53:46.639585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.866 qpair failed and we were unable to recover it. 
00:28:58.866 [2024-12-10 05:53:46.639725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.866 [2024-12-10 05:53:46.639757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:58.866 qpair failed and we were unable to recover it.
[... the above connect() failed / sock connection error / qpair failed triplet repeats verbatim (timestamps 05:53:46.639 through 05:53:46.667, all for tqpair=0x8cf1a0, addr=10.0.0.2, port=4420); duplicate entries omitted ...]
00:28:58.869 [2024-12-10 05:53:46.668134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.668174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.668375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.668407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.668705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.668737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.669018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.669050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.669251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.669285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 
00:28:58.869 [2024-12-10 05:53:46.669537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.669570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.669753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.669785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.670062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.670094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.670302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.670335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.670573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.670605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 
00:28:58.869 [2024-12-10 05:53:46.670768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.670800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.671049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.671081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.671296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.671329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.671582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.671620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.671859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.671892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 
00:28:58.869 [2024-12-10 05:53:46.672142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.869 [2024-12-10 05:53:46.672198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.869 qpair failed and we were unable to recover it. 00:28:58.869 [2024-12-10 05:53:46.672381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.672413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.672570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.672603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.672726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.672758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.673024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.673056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 
00:28:58.870 [2024-12-10 05:53:46.673253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.673287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.673541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.673573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.673844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.673876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.674002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.674035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.674293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.674326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 
00:28:58.870 [2024-12-10 05:53:46.674483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.674515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.674726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.674759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.674973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.675006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.675238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.675272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.675473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.675506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 
00:28:58.870 [2024-12-10 05:53:46.675727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.675759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.675905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.675937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.676211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.676245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.676393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.676425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.676721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.676753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 
00:28:58.870 [2024-12-10 05:53:46.677005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.677036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.677319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.677353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.677495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.677527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.677781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.677812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.678012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.678044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 
00:28:58.870 [2024-12-10 05:53:46.678239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.678279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.678473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.678504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.678613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.678645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.678898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.678930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.679138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.679180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 
00:28:58.870 [2024-12-10 05:53:46.679435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.679468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.679675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.679707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.679936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.679968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.680153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.680193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.680350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.680383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 
00:28:58.870 [2024-12-10 05:53:46.680564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.680595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.680813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.680847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.681027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.681060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.681275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.681308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 00:28:58.870 [2024-12-10 05:53:46.681587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.870 [2024-12-10 05:53:46.681620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.870 qpair failed and we were unable to recover it. 
00:28:58.870 [2024-12-10 05:53:46.681771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.681804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.682009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.682041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.682274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.682308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.682431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.682463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.682619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.682651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 
00:28:58.871 [2024-12-10 05:53:46.682884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.682916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.683212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.683247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.683396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.683427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.683644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.683675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.683977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.684009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 
00:28:58.871 [2024-12-10 05:53:46.684307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.684341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.684535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.684567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.684847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.684879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.685089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.685121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.685279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.685312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 
00:28:58.871 [2024-12-10 05:53:46.685517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.685549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.685683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.685716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.685972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.686005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.686212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.686246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.871 [2024-12-10 05:53:46.686429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.686461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 
00:28:58.871 [2024-12-10 05:53:46.686658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.871 [2024-12-10 05:53:46.686690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.871 qpair failed and we were unable to recover it. 00:28:58.874 [log collapsed: the preceding three messages repeated 114 more times, timestamps 05:53:46.686993 through 05:53:46.714354, all with errno = 111 (ECONNREFUSED), tqpair=0x8cf1a0, addr=10.0.0.2, port=4420]
00:28:58.874 [2024-12-10 05:53:46.714622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.714655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 00:28:58.874 [2024-12-10 05:53:46.714920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.714952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 00:28:58.874 [2024-12-10 05:53:46.715162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.715203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 00:28:58.874 [2024-12-10 05:53:46.715404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.715436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 00:28:58.874 [2024-12-10 05:53:46.715663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.715694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 
00:28:58.874 [2024-12-10 05:53:46.715898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.715930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 00:28:58.874 [2024-12-10 05:53:46.716207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.716242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 00:28:58.874 [2024-12-10 05:53:46.716448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.716481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 00:28:58.874 [2024-12-10 05:53:46.716702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.716734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 00:28:58.874 [2024-12-10 05:53:46.716999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.717032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 
00:28:58.874 [2024-12-10 05:53:46.717188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.717221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 00:28:58.874 [2024-12-10 05:53:46.717449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.717481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 00:28:58.874 [2024-12-10 05:53:46.717684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.874 [2024-12-10 05:53:46.717717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:58.874 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.718047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.718080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.718376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.718409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 
00:28:59.156 [2024-12-10 05:53:46.718615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.718648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.718939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.718970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.719270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.719303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.719433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.719466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.719665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.719697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 
00:28:59.156 [2024-12-10 05:53:46.719853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.719886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.720021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.720054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.720249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.720282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.720496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.720528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.720666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.720698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 
00:28:59.156 [2024-12-10 05:53:46.720960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.720992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.721182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.721215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.721419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.721457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.721640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.721673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.721992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.722025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 
00:28:59.156 [2024-12-10 05:53:46.722261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.722295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.722492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.722525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.722725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.722757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.723004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.723036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.723241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.723274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 
00:28:59.156 [2024-12-10 05:53:46.723524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.723557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.723710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.723742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.723942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.723974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.724310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.724344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.724598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.724630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 
00:28:59.156 [2024-12-10 05:53:46.724846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.724879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.725063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.725096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.725302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.725336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.725550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.725583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.725788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.725820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 
00:28:59.156 [2024-12-10 05:53:46.726033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.726065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.726397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.726434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.726717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.726748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.726930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.726962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.727162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.727202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 
00:28:59.156 [2024-12-10 05:53:46.727403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.727436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.727591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.727623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.727769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.727800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.728054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.728087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.728285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.728319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 
00:28:59.156 [2024-12-10 05:53:46.728542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.728574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.728875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.728907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.729088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.729121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.729372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.729406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.729660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.729692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 
00:28:59.156 [2024-12-10 05:53:46.729927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.729960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.730156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.730200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.156 qpair failed and we were unable to recover it. 00:28:59.156 [2024-12-10 05:53:46.730409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.156 [2024-12-10 05:53:46.730441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.730638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.730670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.730880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.730912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 
00:28:59.157 [2024-12-10 05:53:46.731112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.731144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.731267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.731299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.731527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.731559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.731686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.731723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.732001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.732034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 
00:28:59.157 [2024-12-10 05:53:46.732315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.732348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.732660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.732692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.732839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.732871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.733144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.733184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.733316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.733348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 
00:28:59.157 [2024-12-10 05:53:46.733549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.733582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.733836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.733868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.734062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.734095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.734324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.734358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 00:28:59.157 [2024-12-10 05:53:46.734549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.157 [2024-12-10 05:53:46.734581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.157 qpair failed and we were unable to recover it. 
00:28:59.159 [2024-12-10 05:53:46.762003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.762035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.762325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.762359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.762560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.762593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.762814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.762847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.762977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.763009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 
00:28:59.159 [2024-12-10 05:53:46.763238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.763272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.763476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.763508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.763754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.763786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.763982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.764015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.764314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.764347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 
00:28:59.159 [2024-12-10 05:53:46.764613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.764645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.764935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.764968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.765272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.765306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.765459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.765490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.765707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.765745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 
00:28:59.159 [2024-12-10 05:53:46.766061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.766093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.766277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.766310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.766577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.766610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.766832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.766864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.767049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.767082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 
00:28:59.159 [2024-12-10 05:53:46.767372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.767406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.767642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.767674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.768000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.768032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.768343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.768378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.768577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.768609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 
00:28:59.159 [2024-12-10 05:53:46.768790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.768823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.769025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.769056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.769324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.769358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.769512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.769545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 00:28:59.159 [2024-12-10 05:53:46.769796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.769828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.159 qpair failed and we were unable to recover it. 
00:28:59.159 [2024-12-10 05:53:46.770098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.159 [2024-12-10 05:53:46.770130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.770322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.770355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.770549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.770582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.770728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.770760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.771017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.771050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 
00:28:59.160 [2024-12-10 05:53:46.771316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.771350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.771488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.771519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.771769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.771802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.772052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.772084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.772357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.772391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 
00:28:59.160 [2024-12-10 05:53:46.772545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.772578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.772686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.772724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.772973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.773006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.773271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.773304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.773502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.773533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 
00:28:59.160 [2024-12-10 05:53:46.773694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.773726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.773937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.773970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.774108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.774140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.774378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.774411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.774619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.774651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 
00:28:59.160 [2024-12-10 05:53:46.774922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.774953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.775078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.775110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.775337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.775371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.775552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.775584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.775771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.775803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 
00:28:59.160 [2024-12-10 05:53:46.776107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.776140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.776313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.776346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.776544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.776576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.776835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.776867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.777047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.777080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 
00:28:59.160 [2024-12-10 05:53:46.777287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.777321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.777598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.777631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.777764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.777795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.778068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.778100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.778396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.778431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 
00:28:59.160 [2024-12-10 05:53:46.778626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.778658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.778808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.778840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.779118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.779149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.779379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.779412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.779598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.779631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 
00:28:59.160 [2024-12-10 05:53:46.779832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.779864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.780113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.780146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.780324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.780357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.780608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.780640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 00:28:59.160 [2024-12-10 05:53:46.780906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.780939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 
00:28:59.160 [2024-12-10 05:53:46.781131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.160 [2024-12-10 05:53:46.781163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.160 qpair failed and we were unable to recover it. 
00:28:59.161 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplets for tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 repeated from 05:53:46.781377 through 05:53:46.792538 ...] 
00:28:59.161 [2024-12-10 05:53:46.792742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.161 [2024-12-10 05:53:46.792822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.161 qpair failed and we were unable to recover it. 
00:28:59.163 [... identical triplets for tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 repeated from 05:53:46.793104 through 05:53:46.808740 ...] 
00:28:59.163 [2024-12-10 05:53:46.808949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.808981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.809200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.809236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.809446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.809480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.809612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.809643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.809858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.809890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 
00:28:59.163 [2024-12-10 05:53:46.810037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.810069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.810339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.810372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.810621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.810653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.810869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.810901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.811078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.811110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 
00:28:59.163 [2024-12-10 05:53:46.811411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.811445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.811712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.811744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.811875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.811908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.812160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.812225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.812532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.812565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 
00:28:59.163 [2024-12-10 05:53:46.812679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.812711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.812977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.813010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.813305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.813340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.813573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.813605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.813874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.813907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 
00:28:59.163 [2024-12-10 05:53:46.814197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.814230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.814383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.814414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.814668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.814700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.815034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.815065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.815206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.815239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 
00:28:59.163 [2024-12-10 05:53:46.815440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.815473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.815676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.815707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.815867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.815900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.816183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.816216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.816495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.816528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 
00:28:59.163 [2024-12-10 05:53:46.816832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.816865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.817127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.817160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.817330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.817363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.817561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.817593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.817920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.817953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 
00:28:59.163 [2024-12-10 05:53:46.818229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.818262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.818457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.818489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.818637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.818669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.818895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.818927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.819187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.819221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 
00:28:59.163 [2024-12-10 05:53:46.819424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.819456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.819651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.819683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.819811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.819843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.820105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.820137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.820328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.820362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 
00:28:59.163 [2024-12-10 05:53:46.820553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.820588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.820725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.820758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.820947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.820980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.821270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.821303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.821492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.821525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 
00:28:59.163 [2024-12-10 05:53:46.821738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.821770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.821971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.822002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.822324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.822358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.822584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.163 [2024-12-10 05:53:46.822622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.163 qpair failed and we were unable to recover it. 00:28:59.163 [2024-12-10 05:53:46.822764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.822797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 
00:28:59.164 [2024-12-10 05:53:46.822935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.822967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.823245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.823278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.823484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.823518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.823806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.823841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.824120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.824153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 
00:28:59.164 [2024-12-10 05:53:46.824322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.824354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.824507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.824539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.824697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.824730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.824914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.824946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.825150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.825191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 
00:28:59.164 [2024-12-10 05:53:46.825326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.825359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.825576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.825608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.825764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.825797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.826006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.826039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.826274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.826307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 
00:28:59.164 [2024-12-10 05:53:46.826432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.826463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.826676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.826709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.826854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.826886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.827033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.827069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 00:28:59.164 [2024-12-10 05:53:46.827255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.164 [2024-12-10 05:53:46.827288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.164 qpair failed and we were unable to recover it. 
00:28:59.164 [2024-12-10 05:53:46.827525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.164 [2024-12-10 05:53:46.827557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.164 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats ~110 more times, timestamps 05:53:46.827701 through 05:53:46.854677 ...]
00:28:59.166 [2024-12-10 05:53:46.854820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.854852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.855114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.855147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.855302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.855334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.855466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.855498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.855748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.855782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 
00:28:59.166 [2024-12-10 05:53:46.856005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.856037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.856157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.856198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.856336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.856368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.856584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.856616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.856898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.856929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 
00:28:59.166 [2024-12-10 05:53:46.857048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.857081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.857288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.857322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.857481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.857513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.857644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.857676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.857953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.857985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 
00:28:59.166 [2024-12-10 05:53:46.858271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.858303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.858462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.858493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.858636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.858668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.858898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.858929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.859049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.859081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 
00:28:59.166 [2024-12-10 05:53:46.859226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.859259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.166 [2024-12-10 05:53:46.859462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.166 [2024-12-10 05:53:46.859494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.166 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.859630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.859664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.859898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.859937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.860212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.860246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 
00:28:59.167 [2024-12-10 05:53:46.860392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.860424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.860645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.860676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.860823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.860854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.861056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.861087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.861222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.861256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 
00:28:59.167 [2024-12-10 05:53:46.861448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.861480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.861674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.861706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.861839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.861871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.862084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.862115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.862285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.862317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 
00:28:59.167 [2024-12-10 05:53:46.862524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.862556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.862708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.862740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.862938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.862970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.863199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.863234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.863458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.863490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 
00:28:59.167 [2024-12-10 05:53:46.863675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.863706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.863975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.864007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.864315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.864348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.864483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.864515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.864700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.864732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 
00:28:59.167 [2024-12-10 05:53:46.864941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.864973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.865264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.865297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.865412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.865444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.865661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.865692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.865843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.865874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 
00:28:59.167 [2024-12-10 05:53:46.866076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.866109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.866251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.866285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.866438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.866470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.866670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.866702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.866933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.866965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 
00:28:59.167 [2024-12-10 05:53:46.867253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.867286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.867435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.867468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.867605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.867638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.867771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.867802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.867985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.868017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 
00:28:59.167 [2024-12-10 05:53:46.868149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.868190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.868339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.868372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.868511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.868542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.868680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.868718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 00:28:59.167 [2024-12-10 05:53:46.868833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.868865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 
00:28:59.167 [2024-12-10 05:53:46.869039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.167 [2024-12-10 05:53:46.869129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.167 qpair failed and we were unable to recover it. 
00:28:59.168 [2024-12-10 05:53:46.874788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.874865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.875085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.875123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.875283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.875319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.875454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.875487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.875689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.875723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 
00:28:59.168 [2024-12-10 05:53:46.875854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.875887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.876020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.876053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.876241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.876276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.876424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.876457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.876574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.876606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 
00:28:59.168 [2024-12-10 05:53:46.876727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.876760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.876902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.876935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.877058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.877091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.877278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.877322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.877437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.877470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 
00:28:59.168 [2024-12-10 05:53:46.877667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.877700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.877828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.877861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.878095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.878128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.878339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.878373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.878564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.878597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 
00:28:59.168 [2024-12-10 05:53:46.878726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.878759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.878877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.878910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.879036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.879068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.879343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.879377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.879521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.879554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 
00:28:59.168 [2024-12-10 05:53:46.879699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.879732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.879849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.879881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.880119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.880152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.880291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.880325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.880527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.880560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 
00:28:59.168 [2024-12-10 05:53:46.880814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.880846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.881096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.881129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.881274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.881307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.881494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.881528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.881649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.881682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 
00:28:59.168 [2024-12-10 05:53:46.881871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.881903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.882042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.168 [2024-12-10 05:53:46.882074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.168 qpair failed and we were unable to recover it. 00:28:59.168 [2024-12-10 05:53:46.882198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.882231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.882415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.882448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.882591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.882623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 
00:28:59.169 [2024-12-10 05:53:46.882739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.882773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.882883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.882916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.883032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.883064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.883179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.883213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.883394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.883427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 
00:28:59.169 [2024-12-10 05:53:46.883605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.883638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.883840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.883874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.884005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.884037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.884179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.884213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.884397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.884429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 
00:28:59.169 [2024-12-10 05:53:46.884621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.884654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.884779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.884813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.884938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.884971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.885225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.885265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.885454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.885488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 
00:28:59.169 [2024-12-10 05:53:46.885611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.885643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.885780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.885813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.886014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.886046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.886234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.886269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.886519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.886554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 
00:28:59.169 [2024-12-10 05:53:46.886766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.886799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.887007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.887040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.887201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.887236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.887370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.887404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.887660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.887692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 
00:28:59.169 [2024-12-10 05:53:46.887846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.887878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.888065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.888098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.888257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.888290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.888485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.888519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.888656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.888689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 
00:28:59.169 [2024-12-10 05:53:46.888869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.888903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.889026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.889059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.889187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.889221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.889348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.889380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.889520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.889552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 
00:28:59.169 [2024-12-10 05:53:46.889662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.889696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.889874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.889907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.890018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.890051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.890236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.890271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 00:28:59.169 [2024-12-10 05:53:46.890387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.890420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it. 
00:28:59.169 [2024-12-10 05:53:46.890605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.169 [2024-12-10 05:53:46.890684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.169 qpair failed and we were unable to recover it.
00:28:59.170 [2024-12-10 05:53:46.898425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.170 [2024-12-10 05:53:46.898504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.170 qpair failed and we were unable to recover it.
00:28:59.171 [2024-12-10 05:53:46.905733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.171 [2024-12-10 05:53:46.905808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.171 qpair failed and we were unable to recover it.
00:28:59.171 [2024-12-10 05:53:46.912320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.171 [2024-12-10 05:53:46.912351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.171 qpair failed and we were unable to recover it. 00:28:59.171 [2024-12-10 05:53:46.912457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.171 [2024-12-10 05:53:46.912488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.171 qpair failed and we were unable to recover it. 00:28:59.171 [2024-12-10 05:53:46.912602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.171 [2024-12-10 05:53:46.912634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.171 qpair failed and we were unable to recover it. 00:28:59.171 [2024-12-10 05:53:46.912878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.171 [2024-12-10 05:53:46.912909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.171 qpair failed and we were unable to recover it. 00:28:59.171 [2024-12-10 05:53:46.913045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.171 [2024-12-10 05:53:46.913077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.171 qpair failed and we were unable to recover it. 
00:28:59.171 [2024-12-10 05:53:46.913196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.171 [2024-12-10 05:53:46.913230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.171 qpair failed and we were unable to recover it. 00:28:59.171 [2024-12-10 05:53:46.913359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.171 [2024-12-10 05:53:46.913391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.913577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.913609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.913812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.913844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.914021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.914053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 
00:28:59.172 [2024-12-10 05:53:46.914233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.914266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.914470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.914501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.914699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.914731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.914954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.914986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.915229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.915262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 
00:28:59.172 [2024-12-10 05:53:46.915438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.915470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.915613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.915644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.915829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.915861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.915982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.916013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.916200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.916232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 
00:28:59.172 [2024-12-10 05:53:46.916349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.916380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.916553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.916586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.916766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.916798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.916997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.917029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.917142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.917185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 
00:28:59.172 [2024-12-10 05:53:46.917298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.917330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.917432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.917463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.917646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.917678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.917820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.917852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.918053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.918085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 
00:28:59.172 [2024-12-10 05:53:46.918294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.918327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.918506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.918538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.918718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.918749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.918933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.918966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.919122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.919154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 
00:28:59.172 [2024-12-10 05:53:46.919348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.919381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.919571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.919603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.919797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.919828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.919941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.919972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.920092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.920123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 
00:28:59.172 [2024-12-10 05:53:46.920326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.920358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.920548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.920579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.920683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.920715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.920903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.920934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.921124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.921156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 
00:28:59.172 [2024-12-10 05:53:46.921381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.921414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.921671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.921704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.921833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.921865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.921989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.922021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.922209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.922242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 
00:28:59.172 [2024-12-10 05:53:46.922419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.922450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.922567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.922599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.922817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.922850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.923052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.923084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.923275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.923308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 
00:28:59.172 [2024-12-10 05:53:46.923433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.923465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.923591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.923622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.172 [2024-12-10 05:53:46.923751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.172 [2024-12-10 05:53:46.923782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.172 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.923955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.923987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.924165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.924222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 
00:28:59.173 [2024-12-10 05:53:46.924424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.924462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.924653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.924685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.924876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.924907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.925017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.925049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.925243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.925276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 
00:28:59.173 [2024-12-10 05:53:46.925393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.925424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.925632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.925663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.925783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.925814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.926072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.926104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.926220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.926252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 
00:28:59.173 [2024-12-10 05:53:46.926375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.926406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.926603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.926635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.926742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.926773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.926947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.926979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.927251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.927285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 
00:28:59.173 [2024-12-10 05:53:46.927476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.927507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.927643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.927675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.927805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.927837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.927965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.927996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 00:28:59.173 [2024-12-10 05:53:46.928196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.173 [2024-12-10 05:53:46.928230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.173 qpair failed and we were unable to recover it. 
00:28:59.175 [2024-12-10 05:53:46.948596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.948627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.948877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.948908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.949016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.949048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.949156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.949197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.949434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.949466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 
00:28:59.175 [2024-12-10 05:53:46.949642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.949674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.949854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.949891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.950012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.950061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.950190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.950223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.950342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.950374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 
00:28:59.175 [2024-12-10 05:53:46.950482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.950513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.950635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.950666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.950778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.950809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.950922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.950954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.951127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.951159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 
00:28:59.175 [2024-12-10 05:53:46.951364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.951396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.951502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.951534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.951644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.951675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.951795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.951826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.952016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.952048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 
00:28:59.175 [2024-12-10 05:53:46.952180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.952214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.952330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.952362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.952498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.952530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.952657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.952689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.952794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.952825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 
00:28:59.175 [2024-12-10 05:53:46.952931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.952962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.175 qpair failed and we were unable to recover it. 00:28:59.175 [2024-12-10 05:53:46.953065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.175 [2024-12-10 05:53:46.953096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.953208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.953241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.953444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.953476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.953649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.953679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 
00:28:59.176 [2024-12-10 05:53:46.953851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.953883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.954074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.954106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.954236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.954269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.954374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.954405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.954600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.954632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 
00:28:59.176 [2024-12-10 05:53:46.954750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.954781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.954888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.954919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.955105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.955137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.955256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.955288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.955465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.955496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 
00:28:59.176 [2024-12-10 05:53:46.955615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.955647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.955757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.955788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.955912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.955943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.956044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.956076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.956256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.956290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 
00:28:59.176 [2024-12-10 05:53:46.956502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.956534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.956718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.956749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.956863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.956901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.957020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.957052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.957225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.957258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 
00:28:59.176 [2024-12-10 05:53:46.957377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.957409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.957586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.957617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.957791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.957821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.958017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.958049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.958195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.958228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 
00:28:59.176 [2024-12-10 05:53:46.958407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.958439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.958610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.958642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.958815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.958847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.959038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.959069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.959216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.959249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 
00:28:59.176 [2024-12-10 05:53:46.959398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.959431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.959568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.959600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.959783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.959815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.959934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.959965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.960081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.960112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 
00:28:59.176 [2024-12-10 05:53:46.960303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.960337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.960449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.960480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.960593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.960625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.960796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.960828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.961007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.961038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 
00:28:59.176 [2024-12-10 05:53:46.961158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.961200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.961326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.961358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.961558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.961590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.961701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.961732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 00:28:59.176 [2024-12-10 05:53:46.961856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.176 [2024-12-10 05:53:46.961893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.176 qpair failed and we were unable to recover it. 
00:28:59.176 [2024-12-10 05:53:46.961997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.176 [2024-12-10 05:53:46.962028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.176 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / qpair-recovery error pairs for tqpair=0x8cf1a0, addr=10.0.0.2, port=4420 repeated through 05:53:46.981825; duplicates elided]
00:28:59.179 [2024-12-10 05:53:46.982063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.982094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.982283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.982316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.982509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.982540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.982732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.982763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.982972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.983003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 
00:28:59.179 [2024-12-10 05:53:46.983132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.983163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.983279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.983311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.983534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.983567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.983678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.983708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.983888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.983920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 
00:28:59.179 [2024-12-10 05:53:46.984116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.984147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.984262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.984294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.984464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.984495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.984614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.984647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.984774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.984806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 
00:28:59.179 [2024-12-10 05:53:46.984909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.984940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.985045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.985077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.985191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.985223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.985327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.985359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.985567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.985600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 
00:28:59.179 [2024-12-10 05:53:46.985864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.985896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.986032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.986064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.986240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.986273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.986454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.986488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.986616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.986647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 
00:28:59.179 [2024-12-10 05:53:46.986772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.986804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.986988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.987019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.987211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.987245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.987349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.987384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.987504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.987536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 
00:28:59.179 [2024-12-10 05:53:46.987703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.987735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.987934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.987966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.988074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.988105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.988222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.988255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.988365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.988406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 
00:28:59.179 [2024-12-10 05:53:46.988617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.988648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.988829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.988860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.988979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.989011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.989188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.989220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.989412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.989443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 
00:28:59.179 [2024-12-10 05:53:46.989625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.989657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.989830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.989861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.989982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.990012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.990187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.990219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.990329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.990361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 
00:28:59.179 [2024-12-10 05:53:46.990467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.990498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.990597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.990628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.990737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.990768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.990887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.990919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.991121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.991152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 
00:28:59.179 [2024-12-10 05:53:46.991280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.991313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.991486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.991517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.991638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.991669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.991910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.991942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 00:28:59.179 [2024-12-10 05:53:46.992121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.179 [2024-12-10 05:53:46.992152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.179 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:46.992280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.992312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.992556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.992587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.992765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.992797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.992962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.992994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.993190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.993222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:46.993342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.993373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.993497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.993534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.993738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.993770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.993965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.993996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.994184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.994216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:46.994317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.994349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.994542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.994573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.994762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.994793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.994977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.995008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.995201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.995234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:46.995425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.995457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.995649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.995681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.995854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.995885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.996126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.996158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.996278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.996309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:46.996582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.996614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.996733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.996764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.996892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.996923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.997040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.997071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.997182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.997215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:46.997327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.997359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.997596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.997628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.997810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.997842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.998039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.998069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.998239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.998272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:46.998463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.998494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.998595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.998626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.998746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.998777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.998897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.998929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.999049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.999081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:46.999202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.999234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.999353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.999385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.999564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.999596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.999768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:46.999799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:46.999987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.000019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:47.000216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.000249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.000353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.000385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.000586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.000617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.000719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.000750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.000928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.000959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:47.001130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.001162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.001344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.001376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.001482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.001520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.001760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.001791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.001908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.001939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:47.002129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.002161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.002345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.002378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.002555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.002587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.002717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.002749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.180 [2024-12-10 05:53:47.002859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.002890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 
00:28:59.180 [2024-12-10 05:53:47.003058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.180 [2024-12-10 05:53:47.003090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.180 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.003213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.003246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.003435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.003466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.003717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.003748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.003867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.003899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 
00:28:59.181 [2024-12-10 05:53:47.004006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.004038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.004222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.004254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.004457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.004489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.004669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.004700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.004881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.004912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 
00:28:59.181 [2024-12-10 05:53:47.005041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.005073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.005221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.005255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.005500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.005531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.005700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.005731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.005918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.005950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 
00:28:59.181 [2024-12-10 05:53:47.006054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.006085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.006336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.006369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.006551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.006583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.006763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.006794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.006963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.007000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 
00:28:59.181 [2024-12-10 05:53:47.007193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.007227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.007463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.007494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.007671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.007703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.007939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.007971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.008152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.008191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 
00:28:59.181 [2024-12-10 05:53:47.008382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.008413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.008584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.008616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.008849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.008880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.009088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.009120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.009268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.009301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 
00:28:59.181 [2024-12-10 05:53:47.009415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.009447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.009638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.009669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.009848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.009880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.010020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.010053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.010310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.010343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 
00:28:59.181 [2024-12-10 05:53:47.010512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.010543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.010657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.010689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.010940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.010972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.011090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.011121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.011272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.011306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 
00:28:59.181 [2024-12-10 05:53:47.011495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.011527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.011720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.011751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.011873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.011905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.012107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.012138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.012270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.012302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 
00:28:59.181 [2024-12-10 05:53:47.012517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.012549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.012686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.012718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.012983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.013014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.181 qpair failed and we were unable to recover it. 00:28:59.181 [2024-12-10 05:53:47.013193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.181 [2024-12-10 05:53:47.013226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 00:28:59.182 [2024-12-10 05:53:47.013421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.182 [2024-12-10 05:53:47.013453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 
00:28:59.182 [2024-12-10 05:53:47.013693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.182 [2024-12-10 05:53:47.013724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 00:28:59.182 [2024-12-10 05:53:47.013837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.182 [2024-12-10 05:53:47.013869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 00:28:59.182 [2024-12-10 05:53:47.014116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.182 [2024-12-10 05:53:47.014148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 00:28:59.182 [2024-12-10 05:53:47.014438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.182 [2024-12-10 05:53:47.014469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 00:28:59.182 [2024-12-10 05:53:47.014709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.182 [2024-12-10 05:53:47.014741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 
00:28:59.182 [2024-12-10 05:53:47.014929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.182 [2024-12-10 05:53:47.014960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 00:28:59.182 [2024-12-10 05:53:47.015141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.182 [2024-12-10 05:53:47.015181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 00:28:59.182 [2024-12-10 05:53:47.015362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.182 [2024-12-10 05:53:47.015394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 00:28:59.182 [2024-12-10 05:53:47.015637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.182 [2024-12-10 05:53:47.015669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 00:28:59.182 [2024-12-10 05:53:47.015868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.182 [2024-12-10 05:53:47.015899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.182 qpair failed and we were unable to recover it. 
00:28:59.182 [2024-12-10 05:53:47.016174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.182 [2024-12-10 05:53:47.016213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.182 qpair failed and we were unable to recover it.
00:28:59.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1350925 Killed "${NVMF_APP[@]}" "$@"
00:28:59.182 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:59.182 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:59.182 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:59.183 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:59.183 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:59.470 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1351661
00:28:59.470 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1351661
00:28:59.470 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:59.470 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1351661 ']'
00:28:59.470 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:59.470 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:59.470 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:59.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:59.470 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:59.470 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:59.471 [2024-12-10 05:53:47.038278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.038311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.038438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.038470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.038709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.038742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.038859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.038890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.039081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.039120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 
00:28:59.471 [2024-12-10 05:53:47.039312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.039346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.039459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.039490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.039610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.039643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.039836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.039869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.039980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.040011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 
00:28:59.471 [2024-12-10 05:53:47.040185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.040217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.040388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.040420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.040679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.040712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.040811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.040843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.040982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.041013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 
00:28:59.471 [2024-12-10 05:53:47.041125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.041158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.041363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.041396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.041563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.041593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.041867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.041899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.042087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.042119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 
00:28:59.471 [2024-12-10 05:53:47.042367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.042400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.042638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.042672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.042850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.042883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.043135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.043175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.043391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.043424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 
00:28:59.471 [2024-12-10 05:53:47.043600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.043631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.043893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.043926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.044114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.044146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.044288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.044321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.044448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.044479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 
00:28:59.471 [2024-12-10 05:53:47.044671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.471 [2024-12-10 05:53:47.044702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.471 qpair failed and we were unable to recover it. 00:28:59.471 [2024-12-10 05:53:47.044919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.044960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.045082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.045115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.045292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.045325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.045433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.045467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 
00:28:59.472 [2024-12-10 05:53:47.045659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.045691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.045830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.045862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.045992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.046022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.046148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.046193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.046321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.046354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 
00:28:59.472 [2024-12-10 05:53:47.046542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.046573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.046750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.046780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.046893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.046924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.047052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.047083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.047271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.047303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 
00:28:59.472 [2024-12-10 05:53:47.047431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.047462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.047633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.047666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.047847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.047880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.048055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.048085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.048217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.048249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 
00:28:59.472 [2024-12-10 05:53:47.048353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.048384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.048556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.048587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.048773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.048807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.048933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.048964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.049080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.049110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 
00:28:59.472 [2024-12-10 05:53:47.049310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.049342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.049538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.049570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.049835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.049867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.050125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.050157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.050273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.050306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 
00:28:59.472 [2024-12-10 05:53:47.050473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.050505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.050707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.050738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.050874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.050905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.051177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.051210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.051438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.051471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 
00:28:59.472 [2024-12-10 05:53:47.051581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.051613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.051853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.051885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.052019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.052052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.052228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.052260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.472 [2024-12-10 05:53:47.052380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.052412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 
00:28:59.472 [2024-12-10 05:53:47.052548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.472 [2024-12-10 05:53:47.052581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.472 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.052694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.052729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.052897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.052934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.053109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.053144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.053268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.053299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 
00:28:59.473 [2024-12-10 05:53:47.053498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.053530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.053706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.053738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.053919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.053951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.054067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.054098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.054265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.054297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 
00:28:59.473 [2024-12-10 05:53:47.054414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.054446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.054559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.054589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.054767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.054798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.054920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.054951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.055208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.055241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 
00:28:59.473 [2024-12-10 05:53:47.055529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.055562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.055765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.055797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.055982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.056014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.056223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.056256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.056445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.056477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 
00:28:59.473 [2024-12-10 05:53:47.056663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.056692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.056805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.056836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.057019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.057051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.057248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.057280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.057467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.057498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 
00:28:59.473 [2024-12-10 05:53:47.057788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.057820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.057930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.057961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.058130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.058161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.058406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.058438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.058643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.058680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 
00:28:59.473 [2024-12-10 05:53:47.058870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.058902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.059021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.059051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.059238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.059270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.059392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.059423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 00:28:59.473 [2024-12-10 05:53:47.059660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.473 [2024-12-10 05:53:47.059693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.473 qpair failed and we were unable to recover it. 
00:28:59.473 [2024-12-10 05:53:47.059892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.059923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.060055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.060087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.060274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.060308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.060579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.060610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.060791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.060821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 
00:28:59.474 [2024-12-10 05:53:47.060929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.060960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.061083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.061114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.061327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.061359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.061669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.061739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.061934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.061969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 
00:28:59.474 [2024-12-10 05:53:47.062143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.062190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.062464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.062496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.062668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.062699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.062887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.062918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.063033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.063065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 
00:28:59.474 [2024-12-10 05:53:47.063301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.063334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.063466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.063499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.063600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.063634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.063766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.063798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.064041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.064074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 
00:28:59.474 [2024-12-10 05:53:47.064336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.064370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.064631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.064674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.064885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.064917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.065152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.065194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.065373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.065406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 
00:28:59.474 [2024-12-10 05:53:47.065599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.065631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.065821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.065854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.066046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.066078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.066273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.066307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.066478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.066510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 
00:28:59.474 [2024-12-10 05:53:47.066684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.066715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.066887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.066919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.067102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.067134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.067324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.067358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.067493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.067525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 
00:28:59.474 [2024-12-10 05:53:47.067769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.067800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.067916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.067947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.068061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.068094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.474 [2024-12-10 05:53:47.068326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.474 [2024-12-10 05:53:47.068359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.474 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.068567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.068598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 
00:28:59.475 [2024-12-10 05:53:47.068767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.068799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.068931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.068962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.069194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.069227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.069413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.069445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.069683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.069714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 
00:28:59.475 [2024-12-10 05:53:47.069893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.069924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.070127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.070160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.070347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.070379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.070684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.070754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.070925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.070963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 
00:28:59.475 [2024-12-10 05:53:47.071092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.071126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.071416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.071450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.071633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.071665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.071787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.071824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.072067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.072099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 
00:28:59.475 [2024-12-10 05:53:47.072289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.072322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.072592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.072632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.072757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.072788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.072898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.072931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.073139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.073189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 
00:28:59.475 [2024-12-10 05:53:47.073376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.073412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.073624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.073669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.073845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.073879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.074046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.074084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.074268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.074307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 
00:28:59.475 [2024-12-10 05:53:47.074426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.074465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.074586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.074618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.074886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.074918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.075154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.075207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.075389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.075422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 
00:28:59.475 [2024-12-10 05:53:47.075656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.075725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.075928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.075965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.076092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.076124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.076382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.076415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 00:28:59.475 [2024-12-10 05:53:47.076531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.475 [2024-12-10 05:53:47.076563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.475 qpair failed and we were unable to recover it. 
00:28:59.475 [2024-12-10 05:53:47.076743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.475 [2024-12-10 05:53:47.076775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.475 qpair failed and we were unable to recover it.
00:28:59.475 [2024-12-10 05:53:47.077018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.475 [2024-12-10 05:53:47.077049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.475 qpair failed and we were unable to recover it.
00:28:59.475 [2024-12-10 05:53:47.077188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.475 [2024-12-10 05:53:47.077220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.475 qpair failed and we were unable to recover it.
00:28:59.475 [2024-12-10 05:53:47.077405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.077435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.077607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.077638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.077740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.077770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.077882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.077913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.078181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.078213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.078477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.078509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.078752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.078784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.078902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.078933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.079205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.079239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.079414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.079445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.079576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.079612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.079798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.079829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.080035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.080067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.080190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.080224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.080338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.080368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.080631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.080663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.080899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.080930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.081034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.081065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.081245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.081277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.081539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.081571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.081821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.081852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.082023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.082054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.082247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.082279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.082540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.082572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.082754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.082791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.082980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.083013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.083285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.083319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.083478] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization...
00:28:59.476 [2024-12-10 05:53:47.083509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.083531] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:59.476 [2024-12-10 05:53:47.083544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.083673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.083704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.083887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.083917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.084115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.084146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.084366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.084396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.084603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.084636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.084904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.084937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.085075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.085108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.085314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.085350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.085465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.085516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.085705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.476 [2024-12-10 05:53:47.085738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.476 qpair failed and we were unable to recover it.
00:28:59.476 [2024-12-10 05:53:47.085926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.085960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.086213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.086248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.086435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.086468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.086645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.086678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.086943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.086975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.087228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.087263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.087403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.087437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.087676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.087710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.087817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.087849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.088025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.088058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.088162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.088205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.088386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.088417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.088613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.088646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.088843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.088875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.089058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.089089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.089225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.089258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.089447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.089478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.089676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.089708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.089908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.089940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.090110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.090142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.090329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.090364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.090497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.090529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.090641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.090673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.090862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.090895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.091067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.091098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.091298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.091332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.091504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.091536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.091665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.091697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.091886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.091918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.092051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.092084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.092265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.092299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.092549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.092581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.092691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.092724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.092853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.092884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.092988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.093020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.093255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.093288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.093467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.093499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.093617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.093649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.093772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.093810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.477 qpair failed and we were unable to recover it.
00:28:59.477 [2024-12-10 05:53:47.093980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.477 [2024-12-10 05:53:47.094012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.094143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.094183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.094362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.094393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.094508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.094540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.094659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.094691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.094880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.094913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.095097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.095129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.095242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.095275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.095377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.095409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.095520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.095551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.095736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.095768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.095890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.095923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.096181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.096213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.096396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.096428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.096633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.096666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.096868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.096901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.097078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.097110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.097407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.097442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.097547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.097579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.097707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.097739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.097866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.097912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.098155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.098201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.098378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.098411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.098528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.098571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.098693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.098728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.098830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.098862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.099103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.099138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.099322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.099355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.099531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.099564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.099672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.099704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.099902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.099935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.100188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.100221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.100358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.100389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.100494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.100525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.478 [2024-12-10 05:53:47.100729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.478 [2024-12-10 05:53:47.100761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.478 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.100946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.100978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.101157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.101199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.101327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.101363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.101541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.101573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.101709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.101747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.101918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.101950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.102054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.102086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.102193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.102227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.102476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.102508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.102683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.102714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.102831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.102863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.103069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.103102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.103340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.103373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.103609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.103642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.103813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.103845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.104031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.104063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.104243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.104277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.104563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.104595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.104818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.104851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.104966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.104998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.105184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.105216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.105347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.105379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.105509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.105542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.105802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.105833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.105970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.106001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.106105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.106137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.106429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.106471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.106594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.106633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.106735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.106766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.106876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.106908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.107092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.107124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.107278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.107324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.107450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.107484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.107673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.107707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.107897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.107928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.108066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.108099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.108226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.108260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.108474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.108507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.108706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.479 [2024-12-10 05:53:47.108739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.479 qpair failed and we were unable to recover it.
00:28:59.479 [2024-12-10 05:53:47.108935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.108967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.109205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.109238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.109477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.109509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.109631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.109663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.109842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.109874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.110132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.110184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.110442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.110476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.110665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.110698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.110877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.110909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.111127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.111160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.111416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.111450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.111716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.111749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.111920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.111952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.112215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.112249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.112440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.112473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.112645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.112677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.112934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.112967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.113155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.113197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.113370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.113402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.113674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.113707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.113883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.113916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.114186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.114219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.114345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.114377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.114499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.114531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.114721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.114752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.114920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.114952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.115190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.115224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.115342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.115375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.115491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.115523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.115732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.115764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.116024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.116057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.116189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.116222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.116496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.116532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.116775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.116807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.116977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.480 [2024-12-10 05:53:47.117009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.480 qpair failed and we were unable to recover it.
00:28:59.480 [2024-12-10 05:53:47.117141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.480 [2024-12-10 05:53:47.117193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.480 qpair failed and we were unable to recover it. 00:28:59.480 [2024-12-10 05:53:47.117321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.480 [2024-12-10 05:53:47.117353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.480 qpair failed and we were unable to recover it. 00:28:59.480 [2024-12-10 05:53:47.117526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.480 [2024-12-10 05:53:47.117557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.480 qpair failed and we were unable to recover it. 00:28:59.480 [2024-12-10 05:53:47.117845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.480 [2024-12-10 05:53:47.117877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.480 qpair failed and we were unable to recover it. 00:28:59.480 [2024-12-10 05:53:47.118068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.118100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 
00:28:59.481 [2024-12-10 05:53:47.118228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.118260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.118498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.118530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.118728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.118760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.118934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.118966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.119209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.119241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 
00:28:59.481 [2024-12-10 05:53:47.119415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.119454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.119719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.119750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.119958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.119990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.120178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.120211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.120393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.120424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 
00:28:59.481 [2024-12-10 05:53:47.120610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.120642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.120906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.120938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.121225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.121257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.121389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.121422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.121538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.121570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 
00:28:59.481 [2024-12-10 05:53:47.121769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.121800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.121919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.121951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.122060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.122091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.122211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.122243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.122437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.122469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 
00:28:59.481 [2024-12-10 05:53:47.122599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.122631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.122744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.122776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.122952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.122984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.123282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.123315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.123504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.123536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 
00:28:59.481 [2024-12-10 05:53:47.123710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.123741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.123928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.123959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.124073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.124104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.124297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.124330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.124498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.124529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 
00:28:59.481 [2024-12-10 05:53:47.124641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.124673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.124794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.124826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.125050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.125089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.125207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.125242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.125482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.125514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 
00:28:59.481 [2024-12-10 05:53:47.125697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.125728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.125966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.125998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.126096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.481 [2024-12-10 05:53:47.126127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.481 qpair failed and we were unable to recover it. 00:28:59.481 [2024-12-10 05:53:47.126386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.126419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.126550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.126582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 
00:28:59.482 [2024-12-10 05:53:47.126765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.126797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.127030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.127062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.127273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.127306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.127489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.127520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.127692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.127723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 
00:28:59.482 [2024-12-10 05:53:47.127913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.127951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.128114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.128146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.128341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.128373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.128536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.128567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.128745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.128777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 
00:28:59.482 [2024-12-10 05:53:47.128891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.128922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.129051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.129083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.129319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.129352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.129547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.129579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.129780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.129811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 
00:28:59.482 [2024-12-10 05:53:47.129989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.130021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.130308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.130341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.130610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.130642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.130876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.130908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.131090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.131122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 
00:28:59.482 [2024-12-10 05:53:47.131301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.131334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.131521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.131552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.131752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.131784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.131910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.131941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.132147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.132187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 
00:28:59.482 [2024-12-10 05:53:47.132377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.132409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.132532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.132563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.132831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.132862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.133052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.133083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.133222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.133255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 
00:28:59.482 [2024-12-10 05:53:47.133454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.133486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.133659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.133690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.133893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.133935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.482 qpair failed and we were unable to recover it. 00:28:59.482 [2024-12-10 05:53:47.134129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.482 [2024-12-10 05:53:47.134161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.134429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.134461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 
00:28:59.483 [2024-12-10 05:53:47.134574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.134605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.134717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.134748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.134940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.134971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.135181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.135214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.135388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.135419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 
00:28:59.483 [2024-12-10 05:53:47.135620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.135651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.135785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.135816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.136002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.136033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.136140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.136178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.136360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.136391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 
00:28:59.483 [2024-12-10 05:53:47.136566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.136597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.136711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.136742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.136908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.136940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.137113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.137145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 00:28:59.483 [2024-12-10 05:53:47.137325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.483 [2024-12-10 05:53:47.137357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.483 qpair failed and we were unable to recover it. 
00:28:59.483 [2024-12-10 05:53:47.137459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.137489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.137688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.137720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.137982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.138014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.138197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.138230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.138357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.138387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.138679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.138710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.138994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.139025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.139204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.139237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.139368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.139398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.139589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.139627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.139870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.139901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.140108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.140140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.140268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.140301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.140535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.140566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.140752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.140783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.140914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.140944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.141152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.141192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.141456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.141488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.141658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.141689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.141811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.141844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.141976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.142007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.142193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.142227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.142477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.483 [2024-12-10 05:53:47.142509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.483 qpair failed and we were unable to recover it.
00:28:59.483 [2024-12-10 05:53:47.142690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.142722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.142840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.142872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.143053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.143090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.143302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.143336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.143524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.143556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.143680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.143713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.143847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.143878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.144060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.144090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.144315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.144347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.144474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.144505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.144701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.144733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.144994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.145026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.145287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.145321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.145491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.145523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.145726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.145758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.145980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.146011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.146195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.146228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.146508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.146540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.146718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.146749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.146882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.146912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.147084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.147116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.147310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.147343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.147531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.147562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.147828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.147860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.147981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.148012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.148185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.148217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.148396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.148427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.148621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.148661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.148836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.148868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.148974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.149005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.149189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.149222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.149485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.149517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.149690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.149721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.149891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.149922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.150218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.150251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.150353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.150385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.150560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.150592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.150776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.150807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.150975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.151006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.484 [2024-12-10 05:53:47.151196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.484 [2024-12-10 05:53:47.151230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.484 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.151416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.151454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.151566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.151597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.151723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.151754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.151882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.151915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.152046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.152077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.152308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.152345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.152461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.152493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.152698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.152729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.152931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.152962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.153142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.153185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.153364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.153396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.153594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.153625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.153894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.153925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.154054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.154094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.154354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.154388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.154494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.154525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.154725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.154756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.154995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.155026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.155213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.155245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.155417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.155448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.155649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.155681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.155889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.155920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.156026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.156057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.156195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.156228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.156429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.156459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.156575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.156606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.156799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.156831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.157021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.157060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.157263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.157299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.157415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.157447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.157618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.157649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.157848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.485 [2024-12-10 05:53:47.157880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.485 qpair failed and we were unable to recover it.
00:28:59.485 [2024-12-10 05:53:47.158059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.485 [2024-12-10 05:53:47.158090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.485 qpair failed and we were unable to recover it. 00:28:59.485 [2024-12-10 05:53:47.158294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.485 [2024-12-10 05:53:47.158326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.485 qpair failed and we were unable to recover it. 00:28:59.485 [2024-12-10 05:53:47.158495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.485 [2024-12-10 05:53:47.158526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.485 qpair failed and we were unable to recover it. 00:28:59.485 [2024-12-10 05:53:47.158724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.485 [2024-12-10 05:53:47.158755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.485 qpair failed and we were unable to recover it. 00:28:59.485 [2024-12-10 05:53:47.159010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.485 [2024-12-10 05:53:47.159041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.485 qpair failed and we were unable to recover it. 
00:28:59.485 [2024-12-10 05:53:47.159246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.485 [2024-12-10 05:53:47.159280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.485 qpair failed and we were unable to recover it. 00:28:59.485 [2024-12-10 05:53:47.159404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.485 [2024-12-10 05:53:47.159435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.485 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.159623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.159654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.159765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.159796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.159979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.160011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 
00:28:59.486 [2024-12-10 05:53:47.160273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.160307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.160436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.160468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.160673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.160705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.160890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.160922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.161063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.161094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 
00:28:59.486 [2024-12-10 05:53:47.161360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.161393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.161524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.161555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.161792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.161823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.162011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.162043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.162174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.162206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 
00:28:59.486 [2024-12-10 05:53:47.162380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.162411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.162649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.162681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.162870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.162903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.163014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.163046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.163254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.163287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 
00:28:59.486 [2024-12-10 05:53:47.163402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.163433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.163530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.163561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.163732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.163763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.163934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.163966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.164211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.164243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 
00:28:59.486 [2024-12-10 05:53:47.164463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.164494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.164706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.164737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.164977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.165009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.165209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.165242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.165441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.165472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 
00:28:59.486 [2024-12-10 05:53:47.165652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.165690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.165879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.165911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.166040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.166071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.166238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.166271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.166376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.486 [2024-12-10 05:53:47.166476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.166508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 
00:28:59.486 [2024-12-10 05:53:47.166626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.166657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.166783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.166814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.167002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.167035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.167271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.167303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.486 [2024-12-10 05:53:47.167508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.167540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 
00:28:59.486 [2024-12-10 05:53:47.167724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.486 [2024-12-10 05:53:47.167756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.486 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.167964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.167995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.168176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.168209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.168458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.168496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.168703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.168734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 
00:28:59.487 [2024-12-10 05:53:47.168952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.168984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.169195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.169229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.169482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.169514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.169718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.169750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.169943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.169974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 
00:28:59.487 [2024-12-10 05:53:47.170157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.170198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.170375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.170408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.170539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.170570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.170829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.170861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.170975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.171006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 
00:28:59.487 [2024-12-10 05:53:47.171248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.171281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.171484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.171517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.171707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.171739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.171958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.171990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.172203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.172237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 
00:28:59.487 [2024-12-10 05:53:47.172428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.172460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.172599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.172631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.172744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.172775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.172993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.173025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.173212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.173247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 
00:28:59.487 [2024-12-10 05:53:47.173360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.173392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.173513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.173544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.173724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.173756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.173870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.173902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.174084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.174117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 
00:28:59.487 [2024-12-10 05:53:47.174241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.174274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.174515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.174548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.174727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.174760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.174887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.174919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 00:28:59.487 [2024-12-10 05:53:47.175026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.487 [2024-12-10 05:53:47.175059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.487 qpair failed and we were unable to recover it. 
00:28:59.487 [2024-12-10 05:53:47.175195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.488 [2024-12-10 05:53:47.175229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.488 qpair failed and we were unable to recover it. 00:28:59.488 [2024-12-10 05:53:47.175411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.488 [2024-12-10 05:53:47.175444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.488 qpair failed and we were unable to recover it. 00:28:59.488 [2024-12-10 05:53:47.175565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.488 [2024-12-10 05:53:47.175598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.488 qpair failed and we were unable to recover it. 00:28:59.488 [2024-12-10 05:53:47.175712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.488 [2024-12-10 05:53:47.175745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.488 qpair failed and we were unable to recover it. 00:28:59.488 [2024-12-10 05:53:47.175918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.488 [2024-12-10 05:53:47.175952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.488 qpair failed and we were unable to recover it. 
00:28:59.488 [2024-12-10 05:53:47.176210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.488 [2024-12-10 05:53:47.176244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.488 qpair failed and we were unable to recover it.
00:28:59.491 [2024-12-10 05:53:47.201764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.201798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.201983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.202016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.202192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.202227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.202446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.202480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.202716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.202748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 
00:28:59.491 [2024-12-10 05:53:47.202866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.202899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.203012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.203045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.203243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.203277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.203447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.203486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.203660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.203692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 
00:28:59.491 [2024-12-10 05:53:47.203901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.203933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.204149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.204191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.204387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.204425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.204602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.204637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.204744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.204779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 
00:28:59.491 [2024-12-10 05:53:47.205020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.205055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.205246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.205279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.205471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.205505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.205625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.205659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.205773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.205806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 
00:28:59.491 [2024-12-10 05:53:47.205928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.205962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.206100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.206135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.206330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.206366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.206621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.206655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 00:28:59.491 [2024-12-10 05:53:47.206856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.206890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it. 
00:28:59.491 [2024-12-10 05:53:47.207133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.491 [2024-12-10 05:53:47.207178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.491 qpair failed and we were unable to recover it.
00:28:59.491 [2024-12-10 05:53:47.207282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:59.491 [2024-12-10 05:53:47.207320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:59.491 [2024-12-10 05:53:47.207328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:59.491 [2024-12-10 05:53:47.207336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:59.491 [2024-12-10 05:53:47.207341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[two further connect() failed (errno = 111) / qpair-failure pairs for tqpair=0x7f4758000b90 at 05:53:47.207367 and 05:53:47.207520 elided]
[identical connect() failed (errno = 111) / qpair-failure message pairs for tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 repeat between 05:53:47.207725 and 05:53:47.208577; duplicates elided]
[connect() failed (errno = 111) / qpair-failure message pairs continue, now for tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420, between 05:53:47.208795 and 05:53:47.209321; duplicates elided]
00:28:59.492 [2024-12-10 05:53:47.208846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:59.492 [2024-12-10 05:53:47.208947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:59.492 [2024-12-10 05:53:47.209054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:59.492 [2024-12-10 05:53:47.209055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:59.492 [2024-12-10 05:53:47.209509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.209541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.209749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.209783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.209955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.209988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.210185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.210219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.210354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.210388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 
00:28:59.492 [2024-12-10 05:53:47.210573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.210607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.210731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.210764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.211028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.211063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.211204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.211243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.211419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.211453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 
00:28:59.492 [2024-12-10 05:53:47.211579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.211613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.211803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.211837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.211960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.211993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.212131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.212163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.212369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.212401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 
00:28:59.492 [2024-12-10 05:53:47.212581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.212615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.212790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.212824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.213035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.213068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.213189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.213224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.213357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.213390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 
00:28:59.492 [2024-12-10 05:53:47.213573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.213606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.213726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.213759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.213959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.213994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.214136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.214178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.214313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.214347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 
00:28:59.492 [2024-12-10 05:53:47.214452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.214486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.214750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.214784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.214979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.215014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.215148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.215211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.215437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.215471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 
00:28:59.492 [2024-12-10 05:53:47.215609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.215643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-12-10 05:53:47.215753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.492 [2024-12-10 05:53:47.215788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-12-10 05:53:47.215960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.493 [2024-12-10 05:53:47.215995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-12-10 05:53:47.216260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.493 [2024-12-10 05:53:47.216295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-12-10 05:53:47.216467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.493 [2024-12-10 05:53:47.216500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.493 qpair failed and we were unable to recover it. 
00:28:59.493 [2024-12-10 05:53:47.216699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.493 [2024-12-10 05:53:47.216733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-12-10 05:53:47.216915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.493 [2024-12-10 05:53:47.216949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-12-10 05:53:47.217192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.493 [2024-12-10 05:53:47.217227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-12-10 05:53:47.217339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.493 [2024-12-10 05:53:47.217372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-12-10 05:53:47.217554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.493 [2024-12-10 05:53:47.217587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.493 qpair failed and we were unable to recover it. 
00:28:59.494 [2024-12-10 05:53:47.225931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.494 [2024-12-10 05:53:47.225964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.494 qpair failed and we were unable to recover it.
00:28:59.494 [2024-12-10 05:53:47.226101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.494 [2024-12-10 05:53:47.226136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.494 qpair failed and we were unable to recover it.
00:28:59.494 [2024-12-10 05:53:47.226287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.494 [2024-12-10 05:53:47.226348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.494 qpair failed and we were unable to recover it.
00:28:59.494 [2024-12-10 05:53:47.226469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.494 [2024-12-10 05:53:47.226501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.494 qpair failed and we were unable to recover it.
00:28:59.494 [2024-12-10 05:53:47.226684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.494 [2024-12-10 05:53:47.226717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.494 qpair failed and we were unable to recover it.
00:28:59.495 [2024-12-10 05:53:47.231814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.495 [2024-12-10 05:53:47.231848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.495 qpair failed and we were unable to recover it.
00:28:59.495 [2024-12-10 05:53:47.231961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.495 [2024-12-10 05:53:47.231994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.495 qpair failed and we were unable to recover it.
00:28:59.495 [2024-12-10 05:53:47.232104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.495 [2024-12-10 05:53:47.232138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.495 qpair failed and we were unable to recover it.
00:28:59.495 [2024-12-10 05:53:47.232345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.495 [2024-12-10 05:53:47.232393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.495 qpair failed and we were unable to recover it.
00:28:59.495 [2024-12-10 05:53:47.232516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.495 [2024-12-10 05:53:47.232551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.495 qpair failed and we were unable to recover it.
00:28:59.495 [2024-12-10 05:53:47.239106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.239140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.239269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.239304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.239477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.239512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.239706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.239740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.239854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.239887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 
00:28:59.496 [2024-12-10 05:53:47.240003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.240037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.240209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.240244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.240354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.240387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.240585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.240618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.240810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.240843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 
00:28:59.496 [2024-12-10 05:53:47.241104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.241138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.241282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.241316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.241436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.241471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.241709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.241743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.241925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.241958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 
00:28:59.496 [2024-12-10 05:53:47.242065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.242098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.242215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.242251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.242363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.242396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.242529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.242561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.242747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.242781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 
00:28:59.496 [2024-12-10 05:53:47.242979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.243013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.243121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.243155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.243285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.243320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.243453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.243492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.243686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.243721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 
00:28:59.496 [2024-12-10 05:53:47.243894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.243929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.244123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.244157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.244286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.244320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.244557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.244590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.244876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.244911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 
00:28:59.496 [2024-12-10 05:53:47.245096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.245130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.245245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.245279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.245474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.245507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.245754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.245787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.246051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.246085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 
00:28:59.496 [2024-12-10 05:53:47.246274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.246309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.246495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.246528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.246797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.246832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.496 [2024-12-10 05:53:47.247018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.496 [2024-12-10 05:53:47.247051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.496 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.247245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.247279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 
00:28:59.497 [2024-12-10 05:53:47.247515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.247548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.247733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.247764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.248006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.248039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.248255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.248289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.248470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.248504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 
00:28:59.497 [2024-12-10 05:53:47.248687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.248721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.248905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.248938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.249130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.249179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.249297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.249331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.249469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.249504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 
00:28:59.497 [2024-12-10 05:53:47.249625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.249660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.249855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.249889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.250098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.250133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.250301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.250364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.250613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.250655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 
00:28:59.497 [2024-12-10 05:53:47.250770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.250804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.250985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.251018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.251151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.251197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.251314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.251348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.251474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.251506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 
00:28:59.497 [2024-12-10 05:53:47.251687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.251720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.251904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.251937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.252053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.252086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.252263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.252305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.252494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.252527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 
00:28:59.497 [2024-12-10 05:53:47.252667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.252700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.252820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.252853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.253118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.253151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.253362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.253396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.253586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.253618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 
00:28:59.497 [2024-12-10 05:53:47.253747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.253780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.253963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.253996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.254188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.254222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.254349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.254382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.254563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.254596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 
00:28:59.497 [2024-12-10 05:53:47.254715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.254748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.254987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.255020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.255232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.497 [2024-12-10 05:53:47.255267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.497 qpair failed and we were unable to recover it. 00:28:59.497 [2024-12-10 05:53:47.255441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.255474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.255601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.255633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 
00:28:59.498 [2024-12-10 05:53:47.255823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.255856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.255970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.256004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.256250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.256283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.256463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.256497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.256600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.256634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 
00:28:59.498 [2024-12-10 05:53:47.256892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.256925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.257195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.257229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.257428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.257461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.257600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.257633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.257871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.257904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 
00:28:59.498 [2024-12-10 05:53:47.258115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.258149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.258364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.258398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.258639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.258672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.258909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.258942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.259134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.259180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 
00:28:59.498 [2024-12-10 05:53:47.259351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.259384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.259591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.259624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.259752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.259785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.259968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.260000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.260192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.260226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 
00:28:59.498 [2024-12-10 05:53:47.260342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.260375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.260528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.260561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.260678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.260711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.260819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.260863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.260973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.261006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 
00:28:59.498 [2024-12-10 05:53:47.261122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.261155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.261283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.261317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.261467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.261501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.261675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.261709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.261883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.261917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 
00:28:59.498 [2024-12-10 05:53:47.262089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.262123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.262238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.262272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.262379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.262413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.262519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.262553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.262814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.262849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 
00:28:59.498 [2024-12-10 05:53:47.263024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.263058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.263255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.498 [2024-12-10 05:53:47.263290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.498 qpair failed and we were unable to recover it. 00:28:59.498 [2024-12-10 05:53:47.263417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.263452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.263637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.263670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.263935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.263968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 
00:28:59.499 [2024-12-10 05:53:47.264081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.264115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.264370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.264407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.264582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.264615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.264794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.264827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.265088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.265121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 
00:28:59.499 [2024-12-10 05:53:47.265253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.265288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.265472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.265506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.265767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.265802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.265943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.265979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.266163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.266206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 
00:28:59.499 [2024-12-10 05:53:47.266412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.266463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.266677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.266710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.266849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.266881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.267052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.267085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.267258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.267293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 
00:28:59.499 [2024-12-10 05:53:47.267553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.267585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.267823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.267856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.268115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.268148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.268291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.268324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.268565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.268598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 
00:28:59.499 [2024-12-10 05:53:47.268780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.268812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.268983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.269017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.269136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.269181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.269351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.269394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.269598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.269631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 
00:28:59.499 [2024-12-10 05:53:47.269896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.269928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.270138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.270180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.270370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.270403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.270589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.270622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.270734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.270768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 
00:28:59.499 [2024-12-10 05:53:47.270939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.270972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.271177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.271212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.271381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.271414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.271624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.271657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.271864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.271897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 
00:28:59.499 [2024-12-10 05:53:47.272136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.499 [2024-12-10 05:53:47.272179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.499 qpair failed and we were unable to recover it. 00:28:59.499 [2024-12-10 05:53:47.272362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.272396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.272594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.272628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.272843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.272876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.273064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.273097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 
00:28:59.500 [2024-12-10 05:53:47.273311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.273347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.273523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.273556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.273746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.273779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.273899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.273933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.274200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.274233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 
00:28:59.500 [2024-12-10 05:53:47.274428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.274461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.274653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.274686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.274923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.274956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.275273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.275308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.275495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.275529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 
00:28:59.500 [2024-12-10 05:53:47.275781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.275825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.276093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.276128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.276342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.276379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.276504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.276537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.276837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.276871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 
00:28:59.500 [2024-12-10 05:53:47.277056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.277089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.277277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.277311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.277576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.277609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.277796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.277829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.278077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.278110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 
00:28:59.500 [2024-12-10 05:53:47.278321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.278356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.278595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.278629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.278838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.278872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.279051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.279091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 00:28:59.500 [2024-12-10 05:53:47.279329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.500 [2024-12-10 05:53:47.279363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.500 qpair failed and we were unable to recover it. 
00:28:59.500 [2024-12-10 05:53:47.279603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.500 [2024-12-10 05:53:47.279636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.500 qpair failed and we were unable to recover it.
00:28:59.500 [2024-12-10 05:53:47.279804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.500 [2024-12-10 05:53:47.279837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.500 qpair failed and we were unable to recover it.
00:28:59.500 [2024-12-10 05:53:47.280024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.500 [2024-12-10 05:53:47.280058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.500 qpair failed and we were unable to recover it.
00:28:59.500 [2024-12-10 05:53:47.280274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.500 [2024-12-10 05:53:47.280307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.500 qpair failed and we were unable to recover it.
00:28:59.500 [2024-12-10 05:53:47.280517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.500 [2024-12-10 05:53:47.280551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.500 qpair failed and we were unable to recover it.
00:28:59.500 [2024-12-10 05:53:47.280741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.500 [2024-12-10 05:53:47.280774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.500 qpair failed and we were unable to recover it.
00:28:59.500 [2024-12-10 05:53:47.280957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.280989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.281187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.281221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.281422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.281456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.281748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.281781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.281973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.282006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.282286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.282321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.282567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.282601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.282795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.282828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.283007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.283041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.283182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.283217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.283409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.283443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.283582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.283617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.283799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.283832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.283947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.283981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.284186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.284220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.284395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.284430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.284548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.284581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.284775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.284808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.284937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.284971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.285101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.285148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.285386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.285422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.285612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.285645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.285774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.285806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.286055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.286087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.286284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.286318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.286495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.286528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.286652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.286686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.286895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.286928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.287041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.287074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.287209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.287243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.287434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.287469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.287653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.287686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.287820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.287853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.287989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.288022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.288136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.288177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.288355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.288388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.288593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.288625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.288885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.288917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.289100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.289133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.501 qpair failed and we were unable to recover it.
00:28:59.501 [2024-12-10 05:53:47.289255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.501 [2024-12-10 05:53:47.289294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.289501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.289534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.289723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.289755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.289873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.289906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.290162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.290206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.290451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.290484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.290608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.290641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.290865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.290905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.291119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.291152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.291289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.291323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.291515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.291549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.291788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.291822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.292093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.292127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.292354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.292389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.292648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.292681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.292859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.292892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.293107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.293141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.293413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.293448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.293636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.293669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.293856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.293890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.294077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.294110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.294298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.294333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.294520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.294554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.294746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.294779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.294963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.294996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.295189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.295225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.295465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.295499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.295710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.295743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.295978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.296012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.296254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.296289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.296418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.296451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.296676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.296710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.296896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.296930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.297122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.297155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.297305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.297341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.297512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.297545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.297786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.297820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.297996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.298030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.298272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.298307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.298497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.502 [2024-12-10 05:53:47.298530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.502 qpair failed and we were unable to recover it.
00:28:59.502 [2024-12-10 05:53:47.298663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.503 [2024-12-10 05:53:47.298697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.503 qpair failed and we were unable to recover it.
00:28:59.503 [2024-12-10 05:53:47.298886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.503 [2024-12-10 05:53:47.298921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.503 qpair failed and we were unable to recover it.
00:28:59.503 [2024-12-10 05:53:47.299162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.503 [2024-12-10 05:53:47.299223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.503 qpair failed and we were unable to recover it.
00:28:59.503 [2024-12-10 05:53:47.299344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.503 [2024-12-10 05:53:47.299378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.503 qpair failed and we were unable to recover it.
00:28:59.503 [2024-12-10 05:53:47.299554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.503 [2024-12-10 05:53:47.299587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.503 qpair failed and we were unable to recover it.
00:28:59.503 [2024-12-10 05:53:47.299771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.503 [2024-12-10 05:53:47.299804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.503 qpair failed and we were unable to recover it.
00:28:59.503 [2024-12-10 05:53:47.299990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.503 [2024-12-10 05:53:47.300023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.503 qpair failed and we were unable to recover it.
00:28:59.503 [2024-12-10 05:53:47.300217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.503 [2024-12-10 05:53:47.300258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.503 qpair failed and we were unable to recover it.
00:28:59.503 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:59.503 [2024-12-10 05:53:47.300498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.503 [2024-12-10 05:53:47.300532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.503 qpair failed and we were unable to recover it.
00:28:59.503 [2024-12-10 05:53:47.300652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.503 [2024-12-10 05:53:47.300687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.503 qpair failed and we were unable to recover it.
00:28:59.503 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:59.503 [2024-12-10 05:53:47.300945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.300980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.301116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.503 [2024-12-10 05:53:47.301149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.301406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.503 [2024-12-10 05:53:47.301441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.301554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.301588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 
00:28:59.503 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.503 [2024-12-10 05:53:47.301719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.301753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.301938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.301973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.302142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.302183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.302291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.302328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.302433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.302471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 
00:28:59.503 [2024-12-10 05:53:47.302578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.302611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.302812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.302844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.303056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.303089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.303327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.303361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.303543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.303576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 
00:28:59.503 [2024-12-10 05:53:47.303843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.303876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.303981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.304013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.304198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.304232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.304347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.304380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.304571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.304605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 
00:28:59.503 [2024-12-10 05:53:47.304737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.304770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.304969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.305001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.305217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.305250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.305366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.305400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.305521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.305554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 
00:28:59.503 [2024-12-10 05:53:47.305755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.305789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.305923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.305955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.306079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.503 [2024-12-10 05:53:47.306112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.503 qpair failed and we were unable to recover it. 00:28:59.503 [2024-12-10 05:53:47.306318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.306351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.306592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.306625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 
00:28:59.504 [2024-12-10 05:53:47.306836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.306868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.307107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.307139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.307334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.307368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.307490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.307521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.307778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.307809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 
00:28:59.504 [2024-12-10 05:53:47.307934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.307967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.308158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.308202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.308333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.308367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.308497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.308528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.308663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.308694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 
00:28:59.504 [2024-12-10 05:53:47.308912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.308943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.309150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.309188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.309311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.309342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.309512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.309544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.309731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.309763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 
00:28:59.504 [2024-12-10 05:53:47.309963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.309995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.310127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.310159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.310304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.310335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.310599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.310631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.310773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.310810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 
00:28:59.504 [2024-12-10 05:53:47.311050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.311081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.311209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.311242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.311364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.311395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.311577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.311609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.311811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.311842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 
00:28:59.504 [2024-12-10 05:53:47.312024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.312056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.312175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.312207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.312451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.312482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.312595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.312630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.312743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.312777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 
00:28:59.504 [2024-12-10 05:53:47.312886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.312917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.504 [2024-12-10 05:53:47.313043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.504 [2024-12-10 05:53:47.313075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.504 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.313323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.313356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.313541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.313574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.313698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.313730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 
00:28:59.505 [2024-12-10 05:53:47.313867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.313899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.314010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.314042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.314178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.314210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.314430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.314461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.314589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.314621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 
00:28:59.505 [2024-12-10 05:53:47.314755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.314788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.314932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.314963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.315140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.315185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.315356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.315389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.315586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.315617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 
00:28:59.505 [2024-12-10 05:53:47.315825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.315856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.316067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.316113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.316372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.316411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.316625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.316658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.316776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.316807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 
00:28:59.505 [2024-12-10 05:53:47.316912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.316944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.317125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.317156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.317294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.317327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.317555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.317588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 00:28:59.505 [2024-12-10 05:53:47.317776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.505 [2024-12-10 05:53:47.317807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.505 qpair failed and we were unable to recover it. 
00:28:59.505 [2024-12-10 05:53:47.317920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.505 [2024-12-10 05:53:47.317955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.505 qpair failed and we were unable to recover it.
[... the preceding three-line connect()/qpair-failure record for tqpair=0x7f4760000b90 repeats with successive timestamps through 05:53:47.323520 ...]
00:28:59.506 [2024-12-10 05:53:47.323899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.506 [2024-12-10 05:53:47.323939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.506 qpair failed and we were unable to recover it.
[... identical failure records for tqpair=0x8cf1a0 repeat with successive timestamps through 05:53:47.328134 ...]
00:28:59.507 [2024-12-10 05:53:47.328251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.507 [2024-12-10 05:53:47.328286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.507 qpair failed and we were unable to recover it.
[... identical failure records for tqpair=0x7f4760000b90 repeat with successive timestamps through 05:53:47.336648 ...]
00:28:59.773 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... identical failure records repeat through 05:53:47.337064 ...]
00:28:59.773 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... identical failure records repeat through 05:53:47.337870 ...]
00:28:59.773 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.773 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical failure records for tqpair=0x7f4760000b90 repeat with successive timestamps through 05:53:47.338766 ...]
00:28:59.773 [2024-12-10 05:53:47.338962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.338994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 00:28:59.773 [2024-12-10 05:53:47.339122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.339153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 00:28:59.773 [2024-12-10 05:53:47.339334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.339365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 00:28:59.773 [2024-12-10 05:53:47.339498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.339529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 00:28:59.773 [2024-12-10 05:53:47.339742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.339773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 
00:28:59.773 [2024-12-10 05:53:47.339960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.339990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 00:28:59.773 [2024-12-10 05:53:47.340123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.340154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 00:28:59.773 [2024-12-10 05:53:47.340349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.340381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 00:28:59.773 [2024-12-10 05:53:47.340563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.340593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 00:28:59.773 [2024-12-10 05:53:47.340774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.340805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 
00:28:59.773 [2024-12-10 05:53:47.340940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.340980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 00:28:59.773 [2024-12-10 05:53:47.341224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.773 [2024-12-10 05:53:47.341258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.773 qpair failed and we were unable to recover it. 00:28:59.773 [2024-12-10 05:53:47.341380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.341413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.341535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.341567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.341676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.341708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 
00:28:59.774 [2024-12-10 05:53:47.341818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.341850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.341958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.341989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.342091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.342123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.342244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.342277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.342388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.342420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 
00:28:59.774 [2024-12-10 05:53:47.342526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.342558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.342687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.342717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.342839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.342870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.342979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.343019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.343136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.343177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 
00:28:59.774 [2024-12-10 05:53:47.343358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.343390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.343560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.343592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.343857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.343888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.344082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.344114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.344248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.344289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 
00:28:59.774 [2024-12-10 05:53:47.344403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.344442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.344626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.344672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.344795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.344827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.344944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.344974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.345079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.345111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 
00:28:59.774 [2024-12-10 05:53:47.345239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.345272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.345383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.345414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.345615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.345646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.345774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.345813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.345923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.345954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 
00:28:59.774 [2024-12-10 05:53:47.346113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.346159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.346299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.346332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.346509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.346540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.346784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.346815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.346989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.347019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 
00:28:59.774 [2024-12-10 05:53:47.347131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.347185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.347313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.347345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.347473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.347505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.347681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.774 [2024-12-10 05:53:47.347714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.774 qpair failed and we were unable to recover it. 00:28:59.774 [2024-12-10 05:53:47.347907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.347939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 
00:28:59.775 [2024-12-10 05:53:47.348127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.348179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.348388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.348422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.348528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.348559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.348662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.348693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.348878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.348910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 
00:28:59.775 [2024-12-10 05:53:47.349097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.349128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.349314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.349347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.349518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.349549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.349675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.349705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.349892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.349924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 
00:28:59.775 [2024-12-10 05:53:47.350052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.350083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.350302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.350334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.350437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.350468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.350597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.350629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.350742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.350774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 
00:28:59.775 [2024-12-10 05:53:47.350959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.350991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.351198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.351231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.351424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.351456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.351562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.351593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.351857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.351887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 
00:28:59.775 [2024-12-10 05:53:47.352008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.352038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.352152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.352195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.352385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.352417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.352594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.352625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.352879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.352910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 
00:28:59.775 [2024-12-10 05:53:47.353118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.353149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.353269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.353301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.353477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.353515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.353625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.353656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.353843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.353874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 
00:28:59.775 [2024-12-10 05:53:47.354067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.354100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.354341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.354376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.354588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.354620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.354791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.354823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 00:28:59.775 [2024-12-10 05:53:47.355012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.775 [2024-12-10 05:53:47.355043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420 00:28:59.775 qpair failed and we were unable to recover it. 
00:28:59.775 [2024-12-10 05:53:47.355233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.775 [2024-12-10 05:53:47.355266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.775 qpair failed and we were unable to recover it.
00:28:59.775 [2024-12-10 05:53:47.355453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.775 [2024-12-10 05:53:47.355484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.775 qpair failed and we were unable to recover it.
00:28:59.775 [2024-12-10 05:53:47.355680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.775 [2024-12-10 05:53:47.355711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.775 qpair failed and we were unable to recover it.
00:28:59.775 [2024-12-10 05:53:47.355843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.775 [2024-12-10 05:53:47.355874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.775 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.356000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.356032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.356204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.356236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.356437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.356469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.356642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.356675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.356892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.356926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.357126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.357158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.357312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.357345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.357592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.357624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.357737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.357768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.357970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.358003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.358120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.358154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.358340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.358374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.358588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.358621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.358862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.358895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.359155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.359201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.359468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.359500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.359685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.359718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.359896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.359930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.360202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.360236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.360458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.360490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.360626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.360658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.360840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.360872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.361123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.361154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.361278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.361309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.361521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.361553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.361754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.361786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.361994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.362026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.362270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.362302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.362427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.362459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.362674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.362722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.363009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.363041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 Malloc0
00:28:59.776 [2024-12-10 05:53:47.363256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.363291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.363471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.363502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.363641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.363672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.363845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.363875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.776 [2024-12-10 05:53:47.364111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.364143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.364339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.364372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.776 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 [2024-12-10 05:53:47.364499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.776 [2024-12-10 05:53:47.364530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.776 qpair failed and we were unable to recover it.
00:28:59.776 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.777 [2024-12-10 05:53:47.364795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.364826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:59.777 [2024-12-10 05:53:47.365083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.365115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.365366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.365399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.365617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.365650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.365859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.365891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.366148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.366192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.366328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.366360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.366545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.366577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.366816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.366848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.367042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.367073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.367268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.367301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.367549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.367613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.367899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.367934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.368130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.368162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.368423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.368454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.368634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.368667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.368862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.368936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.369222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.369257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.369524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.369556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.369731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.369762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.369903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.369935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.370122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.370153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.370278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.370310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.370574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.370605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.370819] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:59.777 [2024-12-10 05:53:47.370851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.370882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.371119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.371150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.371426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.371458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.371587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.371619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.371802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.371833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.372108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.372147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.372293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.372337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.372521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.372554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.777 [2024-12-10 05:53:47.372740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.777 [2024-12-10 05:53:47.372772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.777 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.373036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.373067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.373315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.373351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.373497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.373537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.373801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.373833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.373949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.373980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.374086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.374117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.374388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.374421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.374607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.374639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.374903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.374935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4758000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.375150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.375194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.375326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.375360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.375530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.375563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.375735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.375766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.376029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.376061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.376305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.376338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.376444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.376476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.376713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.376745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.376948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.376980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.377246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.778 [2024-12-10 05:53:47.377280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420
00:28:59.778 qpair failed and we were unable to recover it.
00:28:59.778 [2024-12-10 05:53:47.377483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.778 [2024-12-10 05:53:47.377515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.778 qpair failed and we were unable to recover it. 00:28:59.778 [2024-12-10 05:53:47.377644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.778 [2024-12-10 05:53:47.377677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.778 qpair failed and we were unable to recover it. 00:28:59.778 [2024-12-10 05:53:47.377870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.778 [2024-12-10 05:53:47.377902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.778 qpair failed and we were unable to recover it. 00:28:59.778 [2024-12-10 05:53:47.378151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.778 [2024-12-10 05:53:47.378206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.778 qpair failed and we were unable to recover it. 00:28:59.778 [2024-12-10 05:53:47.378391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.778 [2024-12-10 05:53:47.378424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4754000b90 with addr=10.0.0.2, port=4420 00:28:59.778 qpair failed and we were unable to recover it. 
00:28:59.778 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.778 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:59.778 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.778 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:59.779 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.779 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:59.779 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.779 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:59.780 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.780 [2024-12-10 05:53:47.395541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.780 [2024-12-10 05:53:47.395604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cf1a0 with addr=10.0.0.2, port=4420
00:28:59.780 qpair failed and we were unable to recover it.
00:28:59.780 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:59.780 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.780 [2024-12-10 05:53:47.396504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.780 [2024-12-10 05:53:47.396542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.780 qpair failed and we were unable to recover it.
00:28:59.780 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:59.780 [2024-12-10 05:53:47.397031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.780 [2024-12-10 05:53:47.397061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.780 qpair failed and we were unable to recover it. 00:28:59.780 [2024-12-10 05:53:47.397236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.780 [2024-12-10 05:53:47.397270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.780 qpair failed and we were unable to recover it. 00:28:59.780 [2024-12-10 05:53:47.397451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.780 [2024-12-10 05:53:47.397483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.780 qpair failed and we were unable to recover it. 00:28:59.780 [2024-12-10 05:53:47.397762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.780 [2024-12-10 05:53:47.397793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.780 qpair failed and we were unable to recover it. 00:28:59.780 [2024-12-10 05:53:47.397965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.780 [2024-12-10 05:53:47.397996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420 00:28:59.780 qpair failed and we were unable to recover it. 
00:28:59.780 [2024-12-10 05:53:47.398112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.780 [2024-12-10 05:53:47.398144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.780 qpair failed and we were unable to recover it.
00:28:59.780 [2024-12-10 05:53:47.398340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.781 [2024-12-10 05:53:47.398379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 [2024-12-10 05:53:47.398501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.781 [2024-12-10 05:53:47.398533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 [2024-12-10 05:53:47.398737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.781 [2024-12-10 05:53:47.398769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 [2024-12-10 05:53:47.398946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.781 [2024-12-10 05:53:47.398978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4760000b90 with addr=10.0.0.2, port=4420
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 [2024-12-10 05:53:47.399058] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:59.781 [2024-12-10 05:53:47.401484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.781 [2024-12-10 05:53:47.401602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.781 [2024-12-10 05:53:47.401646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.781 [2024-12-10 05:53:47.401669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.781 [2024-12-10 05:53:47.401691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4760000b90
00:28:59.781 [2024-12-10 05:53:47.401743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.781 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:59.781 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.781 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:59.781 [2024-12-10 05:53:47.411403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.781 [2024-12-10 05:53:47.411516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.781 [2024-12-10 05:53:47.411554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.781 [2024-12-10 05:53:47.411575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.781 [2024-12-10 05:53:47.411596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4760000b90
00:28:59.781 [2024-12-10 05:53:47.411642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.781 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 05:53:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1350987
00:28:59.781 [2024-12-10 05:53:47.421408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.781 [2024-12-10 05:53:47.421513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.781 [2024-12-10 05:53:47.421569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.781 [2024-12-10 05:53:47.421593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.781 [2024-12-10 05:53:47.421615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.781 [2024-12-10 05:53:47.421667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 [2024-12-10 05:53:47.431403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.781 [2024-12-10 05:53:47.431495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.781 [2024-12-10 05:53:47.431522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.781 [2024-12-10 05:53:47.431537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.781 [2024-12-10 05:53:47.431550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.781 [2024-12-10 05:53:47.431581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 [2024-12-10 05:53:47.441363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.781 [2024-12-10 05:53:47.441426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.781 [2024-12-10 05:53:47.441444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.781 [2024-12-10 05:53:47.441454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.781 [2024-12-10 05:53:47.441463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.781 [2024-12-10 05:53:47.441483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 [2024-12-10 05:53:47.451387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.781 [2024-12-10 05:53:47.451462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.781 [2024-12-10 05:53:47.451475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.781 [2024-12-10 05:53:47.451481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.781 [2024-12-10 05:53:47.451487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.781 [2024-12-10 05:53:47.451502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 [2024-12-10 05:53:47.461456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.781 [2024-12-10 05:53:47.461517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.781 [2024-12-10 05:53:47.461534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.781 [2024-12-10 05:53:47.461540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.781 [2024-12-10 05:53:47.461546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.781 [2024-12-10 05:53:47.461560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 [2024-12-10 05:53:47.471467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.781 [2024-12-10 05:53:47.471536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.781 [2024-12-10 05:53:47.471548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.781 [2024-12-10 05:53:47.471555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.781 [2024-12-10 05:53:47.471561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.781 [2024-12-10 05:53:47.471575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 [2024-12-10 05:53:47.481487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.781 [2024-12-10 05:53:47.481546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.781 [2024-12-10 05:53:47.481559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.781 [2024-12-10 05:53:47.481566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.781 [2024-12-10 05:53:47.481572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.781 [2024-12-10 05:53:47.481586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.781 qpair failed and we were unable to recover it.
00:28:59.781 [2024-12-10 05:53:47.491518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.781 [2024-12-10 05:53:47.491586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.781 [2024-12-10 05:53:47.491599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.491605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.491611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.491626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.501538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.501589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.501601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.501611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.501617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.501631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.511565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.511620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.511632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.511638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.511644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.511658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.521579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.521630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.521642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.521648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.521654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.521668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.531651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.531706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.531718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.531725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.531731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.531745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.541662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.541713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.541726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.541733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.541738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.541753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.551670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.551732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.551745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.551751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.551757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.551772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.561684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.561740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.561752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.561759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.561764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.561779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.571710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.571765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.571778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.571785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.571790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.571804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.581658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.581722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.581736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.581742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.581748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.581763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.591777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.591836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.591850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.591857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.591863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.591877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.601794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.601849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.601862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.601868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.601874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.601889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.611824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.611874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.611887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.611894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.611899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.611913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.782 qpair failed and we were unable to recover it.
00:28:59.782 [2024-12-10 05:53:47.621920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.782 [2024-12-10 05:53:47.621996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.782 [2024-12-10 05:53:47.622008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.782 [2024-12-10 05:53:47.622014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.782 [2024-12-10 05:53:47.622020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.782 [2024-12-10 05:53:47.622034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.783 qpair failed and we were unable to recover it.
00:28:59.783 [2024-12-10 05:53:47.631886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.783 [2024-12-10 05:53:47.631939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.783 [2024-12-10 05:53:47.631952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.783 [2024-12-10 05:53:47.631962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.783 [2024-12-10 05:53:47.631968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.783 [2024-12-10 05:53:47.631982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.783 qpair failed and we were unable to recover it.
00:28:59.783 [2024-12-10 05:53:47.641957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.783 [2024-12-10 05:53:47.642029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.783 [2024-12-10 05:53:47.642041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.783 [2024-12-10 05:53:47.642048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.783 [2024-12-10 05:53:47.642053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.783 [2024-12-10 05:53:47.642068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.783 qpair failed and we were unable to recover it.
00:28:59.783 [2024-12-10 05:53:47.651944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.783 [2024-12-10 05:53:47.651999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.783 [2024-12-10 05:53:47.652013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.783 [2024-12-10 05:53:47.652019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.783 [2024-12-10 05:53:47.652025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:28:59.783 [2024-12-10 05:53:47.652040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.783 qpair failed and we were unable to recover it.
00:29:00.043 [2024-12-10 05:53:47.661963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.043 [2024-12-10 05:53:47.662019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.043 [2024-12-10 05:53:47.662033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.043 [2024-12-10 05:53:47.662040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.043 [2024-12-10 05:53:47.662046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:00.043 [2024-12-10 05:53:47.662060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:00.043 qpair failed and we were unable to recover it.
00:29:00.043 [2024-12-10 05:53:47.672002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.043 [2024-12-10 05:53:47.672056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.043 [2024-12-10 05:53:47.672069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.043 [2024-12-10 05:53:47.672075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.043 [2024-12-10 05:53:47.672081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:00.043 [2024-12-10 05:53:47.672099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:00.043 qpair failed and we were unable to recover it.
00:29:00.043 [2024-12-10 05:53:47.682025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.043 [2024-12-10 05:53:47.682076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.043 [2024-12-10 05:53:47.682089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.043 [2024-12-10 05:53:47.682096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.043 [2024-12-10 05:53:47.682101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:00.043 [2024-12-10 05:53:47.682115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:00.043 qpair failed and we were unable to recover it.
00:29:00.043 [2024-12-10 05:53:47.692043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.043 [2024-12-10 05:53:47.692096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.043 [2024-12-10 05:53:47.692110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.043 [2024-12-10 05:53:47.692116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.043 [2024-12-10 05:53:47.692122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.043 [2024-12-10 05:53:47.692136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.043 qpair failed and we were unable to recover it. 
00:29:00.043 [2024-12-10 05:53:47.702075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.043 [2024-12-10 05:53:47.702128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.043 [2024-12-10 05:53:47.702141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.043 [2024-12-10 05:53:47.702147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.043 [2024-12-10 05:53:47.702153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.043 [2024-12-10 05:53:47.702170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.043 qpair failed and we were unable to recover it. 
00:29:00.043 [2024-12-10 05:53:47.712110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.043 [2024-12-10 05:53:47.712169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.043 [2024-12-10 05:53:47.712183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.043 [2024-12-10 05:53:47.712189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.043 [2024-12-10 05:53:47.712195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.043 [2024-12-10 05:53:47.712209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.043 qpair failed and we were unable to recover it. 
00:29:00.043 [2024-12-10 05:53:47.722119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.043 [2024-12-10 05:53:47.722179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.043 [2024-12-10 05:53:47.722192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.043 [2024-12-10 05:53:47.722198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.043 [2024-12-10 05:53:47.722204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.043 [2024-12-10 05:53:47.722218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.043 qpair failed and we were unable to recover it. 
00:29:00.043 [2024-12-10 05:53:47.732146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.043 [2024-12-10 05:53:47.732203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.043 [2024-12-10 05:53:47.732215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.043 [2024-12-10 05:53:47.732221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.043 [2024-12-10 05:53:47.732227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.732242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.742202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.742262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.742274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.742281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.742286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.742301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.752219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.752273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.752285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.752291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.752297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.752310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.762239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.762296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.762312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.762319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.762324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.762339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.772261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.772312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.772325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.772331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.772337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.772352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.782290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.782340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.782352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.782358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.782364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.782378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.792368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.792422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.792434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.792440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.792445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.792460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.802355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.802415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.802427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.802434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.802443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.802457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.812387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.812438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.812450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.812456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.812462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.812476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.822421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.822470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.822483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.822489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.822495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.822510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.832459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.832513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.832525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.832532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.832537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.832551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.842488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.842547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.842559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.842565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.842570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.842584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.852498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.852548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.852561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.852567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.852573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.852588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.044 [2024-12-10 05:53:47.862538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.044 [2024-12-10 05:53:47.862592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.044 [2024-12-10 05:53:47.862604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.044 [2024-12-10 05:53:47.862610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.044 [2024-12-10 05:53:47.862616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.044 [2024-12-10 05:53:47.862629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.044 qpair failed and we were unable to recover it. 
00:29:00.045 [2024-12-10 05:53:47.872596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.045 [2024-12-10 05:53:47.872653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.045 [2024-12-10 05:53:47.872666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.045 [2024-12-10 05:53:47.872672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.045 [2024-12-10 05:53:47.872678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.045 [2024-12-10 05:53:47.872692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.045 qpair failed and we were unable to recover it. 
00:29:00.045 [2024-12-10 05:53:47.882591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.045 [2024-12-10 05:53:47.882643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.045 [2024-12-10 05:53:47.882655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.045 [2024-12-10 05:53:47.882661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.045 [2024-12-10 05:53:47.882667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.045 [2024-12-10 05:53:47.882682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.045 qpair failed and we were unable to recover it. 
00:29:00.045 [2024-12-10 05:53:47.892628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.045 [2024-12-10 05:53:47.892683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.045 [2024-12-10 05:53:47.892701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.045 [2024-12-10 05:53:47.892708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.045 [2024-12-10 05:53:47.892714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.045 [2024-12-10 05:53:47.892728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.045 qpair failed and we were unable to recover it. 
00:29:00.045 [2024-12-10 05:53:47.902699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.045 [2024-12-10 05:53:47.902803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.045 [2024-12-10 05:53:47.902816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.045 [2024-12-10 05:53:47.902822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.045 [2024-12-10 05:53:47.902828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.045 [2024-12-10 05:53:47.902842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.045 qpair failed and we were unable to recover it. 
00:29:00.045 [2024-12-10 05:53:47.912738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.045 [2024-12-10 05:53:47.912796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.045 [2024-12-10 05:53:47.912808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.045 [2024-12-10 05:53:47.912814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.045 [2024-12-10 05:53:47.912820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.045 [2024-12-10 05:53:47.912835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.045 qpair failed and we were unable to recover it. 
00:29:00.045 [2024-12-10 05:53:47.922718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.045 [2024-12-10 05:53:47.922773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.045 [2024-12-10 05:53:47.922785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.045 [2024-12-10 05:53:47.922792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.045 [2024-12-10 05:53:47.922797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.045 [2024-12-10 05:53:47.922811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.045 qpair failed and we were unable to recover it. 
00:29:00.045 [2024-12-10 05:53:47.932771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.045 [2024-12-10 05:53:47.932859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.045 [2024-12-10 05:53:47.932871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.045 [2024-12-10 05:53:47.932877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.045 [2024-12-10 05:53:47.932886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.305 [2024-12-10 05:53:47.932900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.305 qpair failed and we were unable to recover it. 
00:29:00.305 [2024-12-10 05:53:47.942765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.305 [2024-12-10 05:53:47.942815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.305 [2024-12-10 05:53:47.942827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.305 [2024-12-10 05:53:47.942833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.305 [2024-12-10 05:53:47.942839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.305 [2024-12-10 05:53:47.942854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.305 qpair failed and we were unable to recover it. 
00:29:00.305 [2024-12-10 05:53:47.952798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.305 [2024-12-10 05:53:47.952854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.305 [2024-12-10 05:53:47.952866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.305 [2024-12-10 05:53:47.952873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.305 [2024-12-10 05:53:47.952879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.305 [2024-12-10 05:53:47.952893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.305 qpair failed and we were unable to recover it. 
00:29:00.305 [2024-12-10 05:53:47.962872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.305 [2024-12-10 05:53:47.962927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.305 [2024-12-10 05:53:47.962939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.305 [2024-12-10 05:53:47.962946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.305 [2024-12-10 05:53:47.962951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.305 [2024-12-10 05:53:47.962966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.305 qpair failed and we were unable to recover it. 
00:29:00.305 [2024-12-10 05:53:47.972805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.305 [2024-12-10 05:53:47.972853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.305 [2024-12-10 05:53:47.972865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.305 [2024-12-10 05:53:47.972871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.305 [2024-12-10 05:53:47.972876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.305 [2024-12-10 05:53:47.972891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.305 qpair failed and we were unable to recover it. 
00:29:00.306 [2024-12-10 05:53:47.982883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.306 [2024-12-10 05:53:47.982939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.306 [2024-12-10 05:53:47.982951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.306 [2024-12-10 05:53:47.982958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.306 [2024-12-10 05:53:47.982964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.306 [2024-12-10 05:53:47.982978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.306 qpair failed and we were unable to recover it. 
00:29:00.568 [2024-12-10 05:53:48.323920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.568 [2024-12-10 05:53:48.323975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.568 [2024-12-10 05:53:48.323991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.568 [2024-12-10 05:53:48.323998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.568 [2024-12-10 05:53:48.324004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.568 [2024-12-10 05:53:48.324019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.568 qpair failed and we were unable to recover it. 
00:29:00.568 [2024-12-10 05:53:48.333942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.568 [2024-12-10 05:53:48.333997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.568 [2024-12-10 05:53:48.334011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.568 [2024-12-10 05:53:48.334017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.568 [2024-12-10 05:53:48.334023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.568 [2024-12-10 05:53:48.334038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.568 qpair failed and we were unable to recover it. 
00:29:00.568 [2024-12-10 05:53:48.343921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.343971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.343985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.343992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.343999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.344014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.569 [2024-12-10 05:53:48.353972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.354057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.354070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.354078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.354084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.354098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.569 [2024-12-10 05:53:48.364026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.364082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.364096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.364103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.364112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.364128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.569 [2024-12-10 05:53:48.374019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.374075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.374088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.374095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.374102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.374116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.569 [2024-12-10 05:53:48.383984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.384055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.384068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.384075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.384081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.384097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.569 [2024-12-10 05:53:48.394136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.394201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.394215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.394222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.394228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.394243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.569 [2024-12-10 05:53:48.404132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.404212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.404226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.404233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.404239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.404253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.569 [2024-12-10 05:53:48.414129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.414201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.414215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.414223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.414229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.414244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.569 [2024-12-10 05:53:48.424176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.424231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.424243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.424250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.424256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.424271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.569 [2024-12-10 05:53:48.434202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.434260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.434273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.434279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.434286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.434301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.569 [2024-12-10 05:53:48.444230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.444286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.444301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.444308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.444314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.444330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.569 [2024-12-10 05:53:48.454257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.569 [2024-12-10 05:53:48.454312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.569 [2024-12-10 05:53:48.454335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.569 [2024-12-10 05:53:48.454342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.569 [2024-12-10 05:53:48.454348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.569 [2024-12-10 05:53:48.454363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.569 qpair failed and we were unable to recover it. 
00:29:00.830 [2024-12-10 05:53:48.464270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.830 [2024-12-10 05:53:48.464325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.830 [2024-12-10 05:53:48.464338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.830 [2024-12-10 05:53:48.464344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.830 [2024-12-10 05:53:48.464351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.830 [2024-12-10 05:53:48.464366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.830 qpair failed and we were unable to recover it. 
00:29:00.830 [2024-12-10 05:53:48.474326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.830 [2024-12-10 05:53:48.474384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.830 [2024-12-10 05:53:48.474398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.830 [2024-12-10 05:53:48.474404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.830 [2024-12-10 05:53:48.474411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.830 [2024-12-10 05:53:48.474425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.830 qpair failed and we were unable to recover it. 
00:29:00.830 [2024-12-10 05:53:48.484407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.830 [2024-12-10 05:53:48.484493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.830 [2024-12-10 05:53:48.484506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.830 [2024-12-10 05:53:48.484513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.830 [2024-12-10 05:53:48.484519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.830 [2024-12-10 05:53:48.484534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.830 qpair failed and we were unable to recover it. 
00:29:00.830 [2024-12-10 05:53:48.494367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.830 [2024-12-10 05:53:48.494441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.830 [2024-12-10 05:53:48.494454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.830 [2024-12-10 05:53:48.494461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.830 [2024-12-10 05:53:48.494470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.830 [2024-12-10 05:53:48.494485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.830 qpair failed and we were unable to recover it. 
00:29:00.830 [2024-12-10 05:53:48.504384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.830 [2024-12-10 05:53:48.504434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.830 [2024-12-10 05:53:48.504447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.830 [2024-12-10 05:53:48.504454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.830 [2024-12-10 05:53:48.504460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.830 [2024-12-10 05:53:48.504475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.830 qpair failed and we were unable to recover it. 
00:29:00.830 [2024-12-10 05:53:48.514431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.830 [2024-12-10 05:53:48.514489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.830 [2024-12-10 05:53:48.514502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.830 [2024-12-10 05:53:48.514509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.830 [2024-12-10 05:53:48.514516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.830 [2024-12-10 05:53:48.514530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.830 qpair failed and we were unable to recover it. 
00:29:00.830 [2024-12-10 05:53:48.524468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.830 [2024-12-10 05:53:48.524523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.830 [2024-12-10 05:53:48.524536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.830 [2024-12-10 05:53:48.524543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.830 [2024-12-10 05:53:48.524549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.830 [2024-12-10 05:53:48.524564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.830 qpair failed and we were unable to recover it. 
00:29:00.830 [2024-12-10 05:53:48.534481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.830 [2024-12-10 05:53:48.534538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.830 [2024-12-10 05:53:48.534551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.830 [2024-12-10 05:53:48.534558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.830 [2024-12-10 05:53:48.534564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.830 [2024-12-10 05:53:48.534578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.830 qpair failed and we were unable to recover it. 
00:29:00.830 [2024-12-10 05:53:48.544463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.544520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.544533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.544541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.544547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.544562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.554568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.554632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.554645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.554652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.554658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.554673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.564540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.564593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.564607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.564614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.564620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.564635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.574592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.574644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.574657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.574664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.574670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.574685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.584618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.584694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.584712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.584719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.584726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.584740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.594695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.594749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.594763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.594770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.594776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.594791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.604683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.604742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.604755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.604761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.604768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.604782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.614708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.614760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.614773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.614780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.614787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.614802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.624732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.624836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.624849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.624860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.624866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.624882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.634788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.634841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.634854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.634861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.634867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.634882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.644819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.644876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.644890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.644897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.644903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.644918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.654811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.654867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.654880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.654887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.654893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.654908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.664853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.664905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.664917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.664923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.831 [2024-12-10 05:53:48.664930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.831 [2024-12-10 05:53:48.664948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.831 qpair failed and we were unable to recover it. 
00:29:00.831 [2024-12-10 05:53:48.674905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.831 [2024-12-10 05:53:48.674971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.831 [2024-12-10 05:53:48.674983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.831 [2024-12-10 05:53:48.674991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.832 [2024-12-10 05:53:48.674997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.832 [2024-12-10 05:53:48.675011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.832 qpair failed and we were unable to recover it. 
00:29:00.832 [2024-12-10 05:53:48.684894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.832 [2024-12-10 05:53:48.684948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.832 [2024-12-10 05:53:48.684961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.832 [2024-12-10 05:53:48.684968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.832 [2024-12-10 05:53:48.684975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.832 [2024-12-10 05:53:48.684990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.832 qpair failed and we were unable to recover it. 
00:29:00.832 [2024-12-10 05:53:48.694969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.832 [2024-12-10 05:53:48.695025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.832 [2024-12-10 05:53:48.695037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.832 [2024-12-10 05:53:48.695044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.832 [2024-12-10 05:53:48.695050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.832 [2024-12-10 05:53:48.695064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.832 qpair failed and we were unable to recover it. 
00:29:00.832 [2024-12-10 05:53:48.704946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.832 [2024-12-10 05:53:48.705000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.832 [2024-12-10 05:53:48.705013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.832 [2024-12-10 05:53:48.705020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.832 [2024-12-10 05:53:48.705026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.832 [2024-12-10 05:53:48.705041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.832 qpair failed and we were unable to recover it. 
00:29:00.832 [2024-12-10 05:53:48.714989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.832 [2024-12-10 05:53:48.715052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.832 [2024-12-10 05:53:48.715066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.832 [2024-12-10 05:53:48.715072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.832 [2024-12-10 05:53:48.715079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:00.832 [2024-12-10 05:53:48.715093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.832 qpair failed and we were unable to recover it. 
00:29:01.092 [2024-12-10 05:53:48.725087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-12-10 05:53:48.725144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-12-10 05:53:48.725157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-12-10 05:53:48.725164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.092 [2024-12-10 05:53:48.725174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.092 [2024-12-10 05:53:48.725189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.092 qpair failed and we were unable to recover it. 
00:29:01.092 [2024-12-10 05:53:48.735029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-12-10 05:53:48.735083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-12-10 05:53:48.735096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-12-10 05:53:48.735102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.735109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.735124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.745102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.745159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.745177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.745184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.745190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.745205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.755105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.755181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.755195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.755204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.755211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.755225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.765129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.765184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.765197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.765204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.765210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.765225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.775135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.775185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.775198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.775205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.775210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.775226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.785171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.785222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.785235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.785242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.785248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.785263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.795233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.795286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.795298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.795305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.795311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.795329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.805230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.805289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.805302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.805309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.805316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.805330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.815291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.815355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.815367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.815374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.815381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.815395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.825278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.825334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.825347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.825354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.825361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.825375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.835402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.835457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.835470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.835477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.835484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.835498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.845373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.845474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.845487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.845494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.845500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.845515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.855374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.855442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-12-10 05:53:48.855455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-12-10 05:53:48.855461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-12-10 05:53:48.855467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.093 [2024-12-10 05:53:48.855482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-12-10 05:53:48.865403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-12-10 05:53:48.865458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.094 [2024-12-10 05:53:48.865470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.094 [2024-12-10 05:53:48.865477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.094 [2024-12-10 05:53:48.865483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.094 [2024-12-10 05:53:48.865498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.094 qpair failed and we were unable to recover it. 
00:29:01.094 [2024-12-10 05:53:48.875447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.094 [2024-12-10 05:53:48.875504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.094 [2024-12-10 05:53:48.875517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.094 [2024-12-10 05:53:48.875523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.094 [2024-12-10 05:53:48.875530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.094 [2024-12-10 05:53:48.875544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.094 qpair failed and we were unable to recover it. 
00:29:01.094 [2024-12-10 05:53:48.885459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.094 [2024-12-10 05:53:48.885513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.094 [2024-12-10 05:53:48.885529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.094 [2024-12-10 05:53:48.885537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.094 [2024-12-10 05:53:48.885544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.094 [2024-12-10 05:53:48.885558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.094 qpair failed and we were unable to recover it. 
00:29:01.094 [2024-12-10 05:53:48.895474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.094 [2024-12-10 05:53:48.895526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.094 [2024-12-10 05:53:48.895538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.094 [2024-12-10 05:53:48.895545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.094 [2024-12-10 05:53:48.895551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.094 [2024-12-10 05:53:48.895565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.094 qpair failed and we were unable to recover it. 
00:29:01.094 [2024-12-10 05:53:48.905451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.094 [2024-12-10 05:53:48.905541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.094 [2024-12-10 05:53:48.905554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.094 [2024-12-10 05:53:48.905561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.094 [2024-12-10 05:53:48.905567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.094 [2024-12-10 05:53:48.905582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.094 qpair failed and we were unable to recover it. 
00:29:01.094 [2024-12-10 05:53:48.915543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.094 [2024-12-10 05:53:48.915599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.094 [2024-12-10 05:53:48.915612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.094 [2024-12-10 05:53:48.915618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.094 [2024-12-10 05:53:48.915624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.094 [2024-12-10 05:53:48.915639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.094 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-10 05:53:49.256474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.617 [2024-12-10 05:53:49.256543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.617 [2024-12-10 05:53:49.256556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.617 [2024-12-10 05:53:49.256563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.617 [2024-12-10 05:53:49.256569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.617 [2024-12-10 05:53:49.256584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.617 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-10 05:53:49.266516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.617 [2024-12-10 05:53:49.266607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.617 [2024-12-10 05:53:49.266620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.617 [2024-12-10 05:53:49.266627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.617 [2024-12-10 05:53:49.266633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.617 [2024-12-10 05:53:49.266648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.617 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-10 05:53:49.276608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.617 [2024-12-10 05:53:49.276690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.617 [2024-12-10 05:53:49.276703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.617 [2024-12-10 05:53:49.276710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.617 [2024-12-10 05:53:49.276716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.617 [2024-12-10 05:53:49.276730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.617 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-10 05:53:49.286633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.617 [2024-12-10 05:53:49.286685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.617 [2024-12-10 05:53:49.286698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.617 [2024-12-10 05:53:49.286705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.617 [2024-12-10 05:53:49.286711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.617 [2024-12-10 05:53:49.286726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.617 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-10 05:53:49.296618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.617 [2024-12-10 05:53:49.296694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.617 [2024-12-10 05:53:49.296708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.617 [2024-12-10 05:53:49.296715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.617 [2024-12-10 05:53:49.296721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.617 [2024-12-10 05:53:49.296736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.617 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-10 05:53:49.306645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.617 [2024-12-10 05:53:49.306696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.617 [2024-12-10 05:53:49.306708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.617 [2024-12-10 05:53:49.306715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.617 [2024-12-10 05:53:49.306722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.617 [2024-12-10 05:53:49.306737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.617 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-10 05:53:49.316700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.617 [2024-12-10 05:53:49.316780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.617 [2024-12-10 05:53:49.316793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.617 [2024-12-10 05:53:49.316805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.617 [2024-12-10 05:53:49.316811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.617 [2024-12-10 05:53:49.316826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.617 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-10 05:53:49.326741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.617 [2024-12-10 05:53:49.326800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.617 [2024-12-10 05:53:49.326813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.617 [2024-12-10 05:53:49.326821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.617 [2024-12-10 05:53:49.326826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.617 [2024-12-10 05:53:49.326841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.617 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-10 05:53:49.336716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.336781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.336813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.336821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.336827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.336854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.346761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.346843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.346858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.346864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.346870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.346886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.356733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.356798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.356812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.356819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.356825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.356843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.366821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.366879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.366892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.366899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.366906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.366920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.376841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.376893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.376906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.376913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.376919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.376933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.386858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.386947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.386960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.386967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.386973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.386988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.396904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.396957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.396970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.396977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.396983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.396998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.406948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.407003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.407016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.407023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.407030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.407045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.416963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.417022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.417035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.417042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.417048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.417063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.426923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.426977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.426990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.426997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.427003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.427018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.437068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.437130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.437143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.437150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.437156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.437176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.447047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.447102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.447121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.447129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.447135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.447149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.457109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.457214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.618 [2024-12-10 05:53:49.457227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.618 [2024-12-10 05:53:49.457235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.618 [2024-12-10 05:53:49.457243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.618 [2024-12-10 05:53:49.457259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-10 05:53:49.467016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.618 [2024-12-10 05:53:49.467072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.619 [2024-12-10 05:53:49.467085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.619 [2024-12-10 05:53:49.467092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.619 [2024-12-10 05:53:49.467099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.619 [2024-12-10 05:53:49.467113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.619 qpair failed and we were unable to recover it. 
00:29:01.619 [2024-12-10 05:53:49.477071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.619 [2024-12-10 05:53:49.477127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.619 [2024-12-10 05:53:49.477141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.619 [2024-12-10 05:53:49.477148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.619 [2024-12-10 05:53:49.477154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.619 [2024-12-10 05:53:49.477173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.619 qpair failed and we were unable to recover it. 
00:29:01.619 [2024-12-10 05:53:49.487184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.619 [2024-12-10 05:53:49.487263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.619 [2024-12-10 05:53:49.487276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.619 [2024-12-10 05:53:49.487283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.619 [2024-12-10 05:53:49.487292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.619 [2024-12-10 05:53:49.487308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.619 qpair failed and we were unable to recover it. 
00:29:01.619 [2024-12-10 05:53:49.497102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.619 [2024-12-10 05:53:49.497157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.619 [2024-12-10 05:53:49.497173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.619 [2024-12-10 05:53:49.497181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.619 [2024-12-10 05:53:49.497187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.619 [2024-12-10 05:53:49.497202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.619 qpair failed and we were unable to recover it. 
00:29:01.879 [2024-12-10 05:53:49.507207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.879 [2024-12-10 05:53:49.507261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.879 [2024-12-10 05:53:49.507274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.879 [2024-12-10 05:53:49.507281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.879 [2024-12-10 05:53:49.507287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.879 [2024-12-10 05:53:49.507302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.879 qpair failed and we were unable to recover it. 
00:29:01.879 [2024-12-10 05:53:49.517241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.879 [2024-12-10 05:53:49.517294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.879 [2024-12-10 05:53:49.517307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.879 [2024-12-10 05:53:49.517314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.879 [2024-12-10 05:53:49.517320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.879 [2024-12-10 05:53:49.517335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.879 qpair failed and we were unable to recover it. 
00:29:01.879 [2024-12-10 05:53:49.527202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.879 [2024-12-10 05:53:49.527258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.879 [2024-12-10 05:53:49.527270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.879 [2024-12-10 05:53:49.527277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.879 [2024-12-10 05:53:49.527283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.879 [2024-12-10 05:53:49.527298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.879 qpair failed and we were unable to recover it. 
00:29:01.879 [2024-12-10 05:53:49.537209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.879 [2024-12-10 05:53:49.537267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.879 [2024-12-10 05:53:49.537280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.879 [2024-12-10 05:53:49.537287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.879 [2024-12-10 05:53:49.537293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.879 [2024-12-10 05:53:49.537307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.879 qpair failed and we were unable to recover it. 
00:29:01.879 [2024-12-10 05:53:49.547319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.879 [2024-12-10 05:53:49.547374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.879 [2024-12-10 05:53:49.547386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.879 [2024-12-10 05:53:49.547393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.879 [2024-12-10 05:53:49.547399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.879 [2024-12-10 05:53:49.547413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.879 qpair failed and we were unable to recover it. 
00:29:01.879 [2024-12-10 05:53:49.557307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.879 [2024-12-10 05:53:49.557367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.879 [2024-12-10 05:53:49.557380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.879 [2024-12-10 05:53:49.557387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.879 [2024-12-10 05:53:49.557393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.879 [2024-12-10 05:53:49.557407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.879 qpair failed and we were unable to recover it. 
00:29:01.879 [2024-12-10 05:53:49.567308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.879 [2024-12-10 05:53:49.567365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.879 [2024-12-10 05:53:49.567378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.879 [2024-12-10 05:53:49.567384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.879 [2024-12-10 05:53:49.567390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.879 [2024-12-10 05:53:49.567405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.879 qpair failed and we were unable to recover it. 
00:29:01.879 [2024-12-10 05:53:49.577367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.879 [2024-12-10 05:53:49.577423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.879 [2024-12-10 05:53:49.577438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.879 [2024-12-10 05:53:49.577445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.879 [2024-12-10 05:53:49.577451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.879 [2024-12-10 05:53:49.577465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.879 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.587439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.587494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.587507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.587513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.587520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.587534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.597454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.597531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.597544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.597551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.597557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.597571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.607418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.607499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.607512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.607519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.607525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.607539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.617537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.617636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.617650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.617657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.617667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.617682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.627553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.627626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.627641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.627648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.627656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.627671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.637521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.637578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.637591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.637598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.637605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.637620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.647583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.647634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.647647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.647654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.647660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.647675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.657570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.657623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.657636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.657643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.657649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.657664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.667663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.667719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.667732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.667740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.667747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.667762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.677699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.677800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.677815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.677822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.677829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.677845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.687735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.687791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.687804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.687811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.687817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.687832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.697680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.697730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.697743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.697750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.697757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.697772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.707756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.707812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.707829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.880 [2024-12-10 05:53:49.707836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.880 [2024-12-10 05:53:49.707842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.880 [2024-12-10 05:53:49.707856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.880 qpair failed and we were unable to recover it. 
00:29:01.880 [2024-12-10 05:53:49.717873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.880 [2024-12-10 05:53:49.717934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.880 [2024-12-10 05:53:49.717948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.881 [2024-12-10 05:53:49.717955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.881 [2024-12-10 05:53:49.717961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.881 [2024-12-10 05:53:49.717976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.881 qpair failed and we were unable to recover it. 
00:29:01.881 [2024-12-10 05:53:49.727840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.881 [2024-12-10 05:53:49.727895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.881 [2024-12-10 05:53:49.727908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.881 [2024-12-10 05:53:49.727915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.881 [2024-12-10 05:53:49.727921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.881 [2024-12-10 05:53:49.727936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.881 qpair failed and we were unable to recover it. 
00:29:01.881 [2024-12-10 05:53:49.737855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.881 [2024-12-10 05:53:49.737913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.881 [2024-12-10 05:53:49.737926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.881 [2024-12-10 05:53:49.737933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.881 [2024-12-10 05:53:49.737940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.881 [2024-12-10 05:53:49.737954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.881 qpair failed and we were unable to recover it. 
00:29:01.881 [2024-12-10 05:53:49.747900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.881 [2024-12-10 05:53:49.747957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.881 [2024-12-10 05:53:49.747970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.881 [2024-12-10 05:53:49.747980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.881 [2024-12-10 05:53:49.747987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.881 [2024-12-10 05:53:49.748001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.881 qpair failed and we were unable to recover it. 
00:29:01.881 [2024-12-10 05:53:49.757959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.881 [2024-12-10 05:53:49.758027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.881 [2024-12-10 05:53:49.758040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.881 [2024-12-10 05:53:49.758047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.881 [2024-12-10 05:53:49.758053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.881 [2024-12-10 05:53:49.758069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.881 qpair failed and we were unable to recover it. 
00:29:01.881 [2024-12-10 05:53:49.767963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.881 [2024-12-10 05:53:49.768018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.881 [2024-12-10 05:53:49.768031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.881 [2024-12-10 05:53:49.768038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.881 [2024-12-10 05:53:49.768044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:01.881 [2024-12-10 05:53:49.768059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.881 qpair failed and we were unable to recover it. 
00:29:02.142 [2024-12-10 05:53:49.777910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.142 [2024-12-10 05:53:49.777967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.142 [2024-12-10 05:53:49.777980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.142 [2024-12-10 05:53:49.777987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.142 [2024-12-10 05:53:49.777993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.142 [2024-12-10 05:53:49.778008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.142 qpair failed and we were unable to recover it. 
00:29:02.142 [2024-12-10 05:53:49.788009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.142 [2024-12-10 05:53:49.788111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.142 [2024-12-10 05:53:49.788125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.142 [2024-12-10 05:53:49.788131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.142 [2024-12-10 05:53:49.788138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.142 [2024-12-10 05:53:49.788156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.142 qpair failed and we were unable to recover it. 
00:29:02.142 [2024-12-10 05:53:49.798054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.142 [2024-12-10 05:53:49.798107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.142 [2024-12-10 05:53:49.798121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.142 [2024-12-10 05:53:49.798127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.142 [2024-12-10 05:53:49.798133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.142 [2024-12-10 05:53:49.798148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.142 qpair failed and we were unable to recover it. 
00:29:02.142 [2024-12-10 05:53:49.808124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.142 [2024-12-10 05:53:49.808199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.142 [2024-12-10 05:53:49.808213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.142 [2024-12-10 05:53:49.808220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.142 [2024-12-10 05:53:49.808226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.142 [2024-12-10 05:53:49.808240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.142 qpair failed and we were unable to recover it. 
00:29:02.142 [2024-12-10 05:53:49.818103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.142 [2024-12-10 05:53:49.818157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.142 [2024-12-10 05:53:49.818174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.142 [2024-12-10 05:53:49.818181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.142 [2024-12-10 05:53:49.818187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.142 [2024-12-10 05:53:49.818202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.142 qpair failed and we were unable to recover it. 
00:29:02.142 [2024-12-10 05:53:49.828125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.142 [2024-12-10 05:53:49.828183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.142 [2024-12-10 05:53:49.828195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.142 [2024-12-10 05:53:49.828202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.142 [2024-12-10 05:53:49.828208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.142 [2024-12-10 05:53:49.828223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.142 qpair failed and we were unable to recover it. 
00:29:02.142 [2024-12-10 05:53:49.838212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.142 [2024-12-10 05:53:49.838322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.142 [2024-12-10 05:53:49.838336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.142 [2024-12-10 05:53:49.838343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.142 [2024-12-10 05:53:49.838349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.142 [2024-12-10 05:53:49.838365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.142 qpair failed and we were unable to recover it. 
00:29:02.142 [2024-12-10 05:53:49.848203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.142 [2024-12-10 05:53:49.848256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.142 [2024-12-10 05:53:49.848269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.142 [2024-12-10 05:53:49.848275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.142 [2024-12-10 05:53:49.848282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.142 [2024-12-10 05:53:49.848298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.142 qpair failed and we were unable to recover it. 
[... identical six-record CONNECT failure sequence (ctrlr.c:764 Unknown controller ID 0x1; nvme_fabric.c:599 Connect command failed, rc -5; nvme_fabric.c:610 sct 1, sc 130; nvme_tcp.c:2348 Failed to poll NVMe-oF Fabric CONNECT command; nvme_tcp.c:2125 Failed to connect tqpair=0x7f4758000b90; nvme_qpair.c:812 CQ transport error -6 on qpair id 2; "qpair failed and we were unable to recover it.") repeated 34 more times at ~10 ms intervals, timestamps 2024-12-10 05:53:49.858 through 05:53:50.189, wall clock 00:29:02.142 through 00:29:02.405 ...]
00:29:02.405 [2024-12-10 05:53:50.199147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.405 [2024-12-10 05:53:50.199239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.405 [2024-12-10 05:53:50.199252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.405 [2024-12-10 05:53:50.199259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.405 [2024-12-10 05:53:50.199265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.405 [2024-12-10 05:53:50.199280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.405 qpair failed and we were unable to recover it. 
00:29:02.405 [2024-12-10 05:53:50.209232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.405 [2024-12-10 05:53:50.209288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.405 [2024-12-10 05:53:50.209302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.405 [2024-12-10 05:53:50.209309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.405 [2024-12-10 05:53:50.209315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.405 [2024-12-10 05:53:50.209331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.405 qpair failed and we were unable to recover it. 
00:29:02.405 [2024-12-10 05:53:50.219293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.405 [2024-12-10 05:53:50.219346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.405 [2024-12-10 05:53:50.219360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.405 [2024-12-10 05:53:50.219366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.405 [2024-12-10 05:53:50.219373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.405 [2024-12-10 05:53:50.219388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.405 qpair failed and we were unable to recover it. 
00:29:02.405 [2024-12-10 05:53:50.229288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.405 [2024-12-10 05:53:50.229345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.405 [2024-12-10 05:53:50.229358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.405 [2024-12-10 05:53:50.229366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.405 [2024-12-10 05:53:50.229372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.405 [2024-12-10 05:53:50.229386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.405 qpair failed and we were unable to recover it. 
00:29:02.405 [2024-12-10 05:53:50.239340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.405 [2024-12-10 05:53:50.239400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.405 [2024-12-10 05:53:50.239413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.405 [2024-12-10 05:53:50.239419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.405 [2024-12-10 05:53:50.239426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.405 [2024-12-10 05:53:50.239441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.405 qpair failed and we were unable to recover it. 
00:29:02.405 [2024-12-10 05:53:50.249362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.405 [2024-12-10 05:53:50.249416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.405 [2024-12-10 05:53:50.249429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.405 [2024-12-10 05:53:50.249436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.405 [2024-12-10 05:53:50.249443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.405 [2024-12-10 05:53:50.249458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.405 qpair failed and we were unable to recover it. 
00:29:02.405 [2024-12-10 05:53:50.259382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.406 [2024-12-10 05:53:50.259473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.406 [2024-12-10 05:53:50.259487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.406 [2024-12-10 05:53:50.259494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.406 [2024-12-10 05:53:50.259500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.406 [2024-12-10 05:53:50.259515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.406 qpair failed and we were unable to recover it. 
00:29:02.406 [2024-12-10 05:53:50.269342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.406 [2024-12-10 05:53:50.269402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.406 [2024-12-10 05:53:50.269416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.406 [2024-12-10 05:53:50.269423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.406 [2024-12-10 05:53:50.269429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.406 [2024-12-10 05:53:50.269445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.406 qpair failed and we were unable to recover it. 
00:29:02.406 [2024-12-10 05:53:50.279389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.406 [2024-12-10 05:53:50.279461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.406 [2024-12-10 05:53:50.279474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.406 [2024-12-10 05:53:50.279481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.406 [2024-12-10 05:53:50.279487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.406 [2024-12-10 05:53:50.279502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.406 qpair failed and we were unable to recover it. 
00:29:02.406 [2024-12-10 05:53:50.289483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.406 [2024-12-10 05:53:50.289554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.406 [2024-12-10 05:53:50.289568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.406 [2024-12-10 05:53:50.289575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.406 [2024-12-10 05:53:50.289582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.406 [2024-12-10 05:53:50.289597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.406 qpair failed and we were unable to recover it. 
00:29:02.665 [2024-12-10 05:53:50.299422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.665 [2024-12-10 05:53:50.299483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.665 [2024-12-10 05:53:50.299496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.665 [2024-12-10 05:53:50.299503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.665 [2024-12-10 05:53:50.299510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.665 [2024-12-10 05:53:50.299524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.665 qpair failed and we were unable to recover it. 
00:29:02.665 [2024-12-10 05:53:50.309521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.665 [2024-12-10 05:53:50.309574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.665 [2024-12-10 05:53:50.309588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.665 [2024-12-10 05:53:50.309598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.665 [2024-12-10 05:53:50.309604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.665 [2024-12-10 05:53:50.309619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.665 qpair failed and we were unable to recover it. 
00:29:02.665 [2024-12-10 05:53:50.319590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.665 [2024-12-10 05:53:50.319655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.319668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.319675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.319681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.319696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.329516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.329574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.329587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.329594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.329601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.329615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.339531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.339587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.339599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.339606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.339612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.339627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.349630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.349698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.349712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.349719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.349725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.349743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.359682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.359742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.359755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.359762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.359768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.359783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.369696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.369749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.369762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.369769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.369776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.369791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.379735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.379790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.379804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.379811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.379817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.379832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.389734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.389786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.389799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.389806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.389812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.389826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.399774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.399862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.399876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.399883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.399889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.399903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.409794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.409880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.409893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.409900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.409907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.409921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.419864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.419916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.419929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.419936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.419942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.419957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.429905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.429965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.429978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.429985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.429991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.430006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.439921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.439979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.439993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.440003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.440009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.666 [2024-12-10 05:53:50.440023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.666 qpair failed and we were unable to recover it. 
00:29:02.666 [2024-12-10 05:53:50.449916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.666 [2024-12-10 05:53:50.449969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.666 [2024-12-10 05:53:50.449982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.666 [2024-12-10 05:53:50.449989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.666 [2024-12-10 05:53:50.449995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.667 [2024-12-10 05:53:50.450010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.667 qpair failed and we were unable to recover it. 
00:29:02.667 [2024-12-10 05:53:50.459938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.667 [2024-12-10 05:53:50.459993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.667 [2024-12-10 05:53:50.460006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.667 [2024-12-10 05:53:50.460013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.667 [2024-12-10 05:53:50.460020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.667 [2024-12-10 05:53:50.460033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.667 qpair failed and we were unable to recover it. 
00:29:02.667 [2024-12-10 05:53:50.469961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.667 [2024-12-10 05:53:50.470012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.667 [2024-12-10 05:53:50.470025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.667 [2024-12-10 05:53:50.470032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.667 [2024-12-10 05:53:50.470039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:02.667 [2024-12-10 05:53:50.470054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.667 qpair failed and we were unable to recover it. 
00:29:02.667 [... 34 further identical CONNECT retry blocks omitted: the same sequence (ctrlr.c:764 "Unknown controller ID 0x1" -> nvme_fabric.c:599/610 Connect failed rc -5, sct 1, sc 130 -> nvme_tcp.c:2348/2125 failed to connect tqpair=0x7f4758000b90 -> nvme_qpair.c:812 CQ transport error -6 on qpair id 2 -> "qpair failed and we were unable to recover it.") repeats roughly every 10 ms from 05:53:50.480 through 05:53:50.811 ...]
00:29:03.189 [2024-12-10 05:53:50.820943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.189 [2024-12-10 05:53:50.820993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.189 [2024-12-10 05:53:50.821006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.189 [2024-12-10 05:53:50.821013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.189 [2024-12-10 05:53:50.821019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.189 [2024-12-10 05:53:50.821035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.189 qpair failed and we were unable to recover it.
00:29:03.189 [2024-12-10 05:53:50.831024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.189 [2024-12-10 05:53:50.831083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.189 [2024-12-10 05:53:50.831097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.189 [2024-12-10 05:53:50.831103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.189 [2024-12-10 05:53:50.831110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.189 [2024-12-10 05:53:50.831124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.189 qpair failed and we were unable to recover it.
00:29:03.189 [2024-12-10 05:53:50.841048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.189 [2024-12-10 05:53:50.841106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.189 [2024-12-10 05:53:50.841119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.189 [2024-12-10 05:53:50.841126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.189 [2024-12-10 05:53:50.841133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.189 [2024-12-10 05:53:50.841147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.189 qpair failed and we were unable to recover it.
00:29:03.189 [2024-12-10 05:53:50.851069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.189 [2024-12-10 05:53:50.851149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.189 [2024-12-10 05:53:50.851163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.189 [2024-12-10 05:53:50.851176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.189 [2024-12-10 05:53:50.851182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.189 [2024-12-10 05:53:50.851197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.189 qpair failed and we were unable to recover it.
00:29:03.189 [2024-12-10 05:53:50.861038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.189 [2024-12-10 05:53:50.861093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.189 [2024-12-10 05:53:50.861106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.189 [2024-12-10 05:53:50.861113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.189 [2024-12-10 05:53:50.861119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.189 [2024-12-10 05:53:50.861134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.189 qpair failed and we were unable to recover it.
00:29:03.189 [2024-12-10 05:53:50.871074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.189 [2024-12-10 05:53:50.871171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.189 [2024-12-10 05:53:50.871185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.189 [2024-12-10 05:53:50.871195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.189 [2024-12-10 05:53:50.871202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.871217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.881160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.881221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.881234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.881241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.881247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.881262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.891185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.891236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.891250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.891257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.891263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.891279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.901214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.901269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.901282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.901289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.901295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.901310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.911265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.911322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.911336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.911343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.911349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.911369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.921299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.921354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.921367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.921373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.921380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.921395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.931255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.931313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.931326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.931334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.931340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.931354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.941351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.941409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.941423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.941430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.941436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.941450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.951340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.951400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.951413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.951421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.951426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.951441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.961413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.961488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.961504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.961512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.961520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.961534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.971358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.971416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.971429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.971436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.971442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.971457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.981489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.981544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.981557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.981563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.981570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.981585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:50.991465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:50.991518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:50.991532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:50.991539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:50.991545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:50.991559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.190 [2024-12-10 05:53:51.001481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.190 [2024-12-10 05:53:51.001546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.190 [2024-12-10 05:53:51.001559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.190 [2024-12-10 05:53:51.001570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.190 [2024-12-10 05:53:51.001576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.190 [2024-12-10 05:53:51.001591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.190 qpair failed and we were unable to recover it.
00:29:03.191 [2024-12-10 05:53:51.011470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.191 [2024-12-10 05:53:51.011526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.191 [2024-12-10 05:53:51.011539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.191 [2024-12-10 05:53:51.011546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.191 [2024-12-10 05:53:51.011552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.191 [2024-12-10 05:53:51.011568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.191 qpair failed and we were unable to recover it.
00:29:03.191 [2024-12-10 05:53:51.021538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.191 [2024-12-10 05:53:51.021622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.191 [2024-12-10 05:53:51.021635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.191 [2024-12-10 05:53:51.021642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.191 [2024-12-10 05:53:51.021648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.191 [2024-12-10 05:53:51.021663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.191 qpair failed and we were unable to recover it.
00:29:03.191 [2024-12-10 05:53:51.031592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.191 [2024-12-10 05:53:51.031652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.191 [2024-12-10 05:53:51.031665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.191 [2024-12-10 05:53:51.031672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.191 [2024-12-10 05:53:51.031678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.191 [2024-12-10 05:53:51.031693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.191 qpair failed and we were unable to recover it.
00:29:03.191 [2024-12-10 05:53:51.041678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.191 [2024-12-10 05:53:51.041739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.191 [2024-12-10 05:53:51.041754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.191 [2024-12-10 05:53:51.041762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.191 [2024-12-10 05:53:51.041770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.191 [2024-12-10 05:53:51.041787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.191 qpair failed and we were unable to recover it.
00:29:03.191 [2024-12-10 05:53:51.051660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.191 [2024-12-10 05:53:51.051718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.191 [2024-12-10 05:53:51.051732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.191 [2024-12-10 05:53:51.051740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.191 [2024-12-10 05:53:51.051746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.191 [2024-12-10 05:53:51.051760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.191 qpair failed and we were unable to recover it.
00:29:03.191 [2024-12-10 05:53:51.061618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.191 [2024-12-10 05:53:51.061674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.191 [2024-12-10 05:53:51.061688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.191 [2024-12-10 05:53:51.061694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.191 [2024-12-10 05:53:51.061701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.191 [2024-12-10 05:53:51.061715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.191 qpair failed and we were unable to recover it.
00:29:03.191 [2024-12-10 05:53:51.071686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.191 [2024-12-10 05:53:51.071748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.191 [2024-12-10 05:53:51.071760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.191 [2024-12-10 05:53:51.071768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.191 [2024-12-10 05:53:51.071774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.191 [2024-12-10 05:53:51.071789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.191 qpair failed and we were unable to recover it.
00:29:03.451 [2024-12-10 05:53:51.081733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.451 [2024-12-10 05:53:51.081785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.451 [2024-12-10 05:53:51.081798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.451 [2024-12-10 05:53:51.081805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.451 [2024-12-10 05:53:51.081811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.451 [2024-12-10 05:53:51.081826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.451 qpair failed and we were unable to recover it.
00:29:03.451 [2024-12-10 05:53:51.091769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.451 [2024-12-10 05:53:51.091825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.451 [2024-12-10 05:53:51.091838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.451 [2024-12-10 05:53:51.091845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.451 [2024-12-10 05:53:51.091851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.451 [2024-12-10 05:53:51.091866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.451 qpair failed and we were unable to recover it.
00:29:03.451 [2024-12-10 05:53:51.101735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.451 [2024-12-10 05:53:51.101816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.451 [2024-12-10 05:53:51.101830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.451 [2024-12-10 05:53:51.101838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.451 [2024-12-10 05:53:51.101844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.451 [2024-12-10 05:53:51.101859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.451 qpair failed and we were unable to recover it.
00:29:03.451 [2024-12-10 05:53:51.111797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.451 [2024-12-10 05:53:51.111854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.451 [2024-12-10 05:53:51.111867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.451 [2024-12-10 05:53:51.111874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.451 [2024-12-10 05:53:51.111880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.451 [2024-12-10 05:53:51.111894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.451 qpair failed and we were unable to recover it.
00:29:03.451 [2024-12-10 05:53:51.121781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.451 [2024-12-10 05:53:51.121835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.451 [2024-12-10 05:53:51.121848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.451 [2024-12-10 05:53:51.121854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.451 [2024-12-10 05:53:51.121861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90
00:29:03.451 [2024-12-10 05:53:51.121875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.451 qpair failed and we were unable to recover it.
00:29:03.451 [2024-12-10 05:53:51.131898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.451 [2024-12-10 05:53:51.131963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.451 [2024-12-10 05:53:51.131979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.451 [2024-12-10 05:53:51.131986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.451 [2024-12-10 05:53:51.131993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.451 [2024-12-10 05:53:51.132007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.451 qpair failed and we were unable to recover it. 
00:29:03.451 [2024-12-10 05:53:51.141972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.451 [2024-12-10 05:53:51.142032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.451 [2024-12-10 05:53:51.142046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.451 [2024-12-10 05:53:51.142053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.451 [2024-12-10 05:53:51.142059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.451 [2024-12-10 05:53:51.142073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.451 qpair failed and we were unable to recover it. 
00:29:03.451 [2024-12-10 05:53:51.151965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.451 [2024-12-10 05:53:51.152042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.451 [2024-12-10 05:53:51.152056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.451 [2024-12-10 05:53:51.152063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.451 [2024-12-10 05:53:51.152069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.451 [2024-12-10 05:53:51.152084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.451 qpair failed and we were unable to recover it. 
00:29:03.451 [2024-12-10 05:53:51.161977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.451 [2024-12-10 05:53:51.162032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.451 [2024-12-10 05:53:51.162046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.451 [2024-12-10 05:53:51.162053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.451 [2024-12-10 05:53:51.162059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.451 [2024-12-10 05:53:51.162074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.451 qpair failed and we were unable to recover it. 
00:29:03.451 [2024-12-10 05:53:51.171927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.451 [2024-12-10 05:53:51.171982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.451 [2024-12-10 05:53:51.171995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.451 [2024-12-10 05:53:51.172001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.451 [2024-12-10 05:53:51.172011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.451 [2024-12-10 05:53:51.172026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.451 qpair failed and we were unable to recover it. 
00:29:03.451 [2024-12-10 05:53:51.181954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.451 [2024-12-10 05:53:51.182016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.182029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.182037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.182043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.182058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.191969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.192026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.192041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.192049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.192056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.192071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.202014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.202073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.202086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.202093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.202099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.202115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.212117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.212179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.212194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.212202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.212208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.212224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.222050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.222102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.222115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.222122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.222128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.222143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.232170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.232222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.232235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.232242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.232249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.232263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.242126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.242186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.242199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.242207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.242213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.242228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.252145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.252212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.252225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.252232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.252239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.252253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.262241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.262294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.262309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.262317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.262324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.262339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.272268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.272321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.272334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.272340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.272347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.272361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.282305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.282361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.282373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.282380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.282386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.282401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.292330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.292386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.292399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.292406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.292412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.292426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.302398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.302454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.302467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.302473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.302483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.452 [2024-12-10 05:53:51.302497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.452 qpair failed and we were unable to recover it. 
00:29:03.452 [2024-12-10 05:53:51.312376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.452 [2024-12-10 05:53:51.312432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.452 [2024-12-10 05:53:51.312446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.452 [2024-12-10 05:53:51.312454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.452 [2024-12-10 05:53:51.312460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.453 [2024-12-10 05:53:51.312475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.453 qpair failed and we were unable to recover it. 
00:29:03.453 [2024-12-10 05:53:51.322413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.453 [2024-12-10 05:53:51.322470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.453 [2024-12-10 05:53:51.322484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.453 [2024-12-10 05:53:51.322491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.453 [2024-12-10 05:53:51.322497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.453 [2024-12-10 05:53:51.322512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.453 qpair failed and we were unable to recover it. 
00:29:03.453 [2024-12-10 05:53:51.332384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.453 [2024-12-10 05:53:51.332448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.453 [2024-12-10 05:53:51.332462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.453 [2024-12-10 05:53:51.332470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.453 [2024-12-10 05:53:51.332476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.453 [2024-12-10 05:53:51.332491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.453 qpair failed and we were unable to recover it. 
00:29:03.712 [2024-12-10 05:53:51.342397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.712 [2024-12-10 05:53:51.342495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.712 [2024-12-10 05:53:51.342509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.712 [2024-12-10 05:53:51.342517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.712 [2024-12-10 05:53:51.342523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.712 [2024-12-10 05:53:51.342538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.712 qpair failed and we were unable to recover it. 
00:29:03.712 [2024-12-10 05:53:51.352501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.712 [2024-12-10 05:53:51.352580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.712 [2024-12-10 05:53:51.352592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.712 [2024-12-10 05:53:51.352599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.712 [2024-12-10 05:53:51.352605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.712 [2024-12-10 05:53:51.352620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.712 qpair failed and we were unable to recover it. 
00:29:03.712 [2024-12-10 05:53:51.362577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.712 [2024-12-10 05:53:51.362636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.712 [2024-12-10 05:53:51.362649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.362656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.362662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.362677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.372548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.372604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.372618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.372624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.372631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.372645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.382580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.382686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.382710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.382717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.382724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.382745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.392666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.392722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.392736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.392743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.392749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.392764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.402699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.402757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.402771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.402777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.402784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.402799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.412694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.412746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.412759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.412766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.412772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.412788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.422720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.422775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.422788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.422794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.422801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.422816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.432764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.432816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.432828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.432838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.432844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.432859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.442788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.442846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.442859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.442866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.442873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.442888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.452818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.452875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.452888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.452895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.452901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.452916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.462759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.462820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.462834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.462841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.462847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.462862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.472865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.472921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.472934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.472941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.472948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.472965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.482901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.482956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.482971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.482977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.482984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.482999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.713 qpair failed and we were unable to recover it. 
00:29:03.713 [2024-12-10 05:53:51.492930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.713 [2024-12-10 05:53:51.492990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.713 [2024-12-10 05:53:51.493003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.713 [2024-12-10 05:53:51.493011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.713 [2024-12-10 05:53:51.493017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.713 [2024-12-10 05:53:51.493032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.714 qpair failed and we were unable to recover it. 
00:29:03.714 [2024-12-10 05:53:51.502956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.714 [2024-12-10 05:53:51.503011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.714 [2024-12-10 05:53:51.503024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.714 [2024-12-10 05:53:51.503031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.714 [2024-12-10 05:53:51.503038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.714 [2024-12-10 05:53:51.503052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.714 qpair failed and we were unable to recover it. 
00:29:03.714 [2024-12-10 05:53:51.513004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.714 [2024-12-10 05:53:51.513057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.714 [2024-12-10 05:53:51.513070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.714 [2024-12-10 05:53:51.513077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.714 [2024-12-10 05:53:51.513083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.714 [2024-12-10 05:53:51.513098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.714 qpair failed and we were unable to recover it. 
00:29:03.714 [2024-12-10 05:53:51.523021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.714 [2024-12-10 05:53:51.523081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.714 [2024-12-10 05:53:51.523095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.714 [2024-12-10 05:53:51.523102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.714 [2024-12-10 05:53:51.523108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.714 [2024-12-10 05:53:51.523122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.714 qpair failed and we were unable to recover it. 
00:29:03.714 [2024-12-10 05:53:51.533076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.714 [2024-12-10 05:53:51.533183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.714 [2024-12-10 05:53:51.533197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.714 [2024-12-10 05:53:51.533204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.714 [2024-12-10 05:53:51.533210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.714 [2024-12-10 05:53:51.533224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.714 qpair failed and we were unable to recover it. 
00:29:03.714 [2024-12-10 05:53:51.543075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.714 [2024-12-10 05:53:51.543134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.714 [2024-12-10 05:53:51.543147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.714 [2024-12-10 05:53:51.543154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.714 [2024-12-10 05:53:51.543160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.714 [2024-12-10 05:53:51.543179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.714 qpair failed and we were unable to recover it. 
00:29:03.714 [2024-12-10 05:53:51.553101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.714 [2024-12-10 05:53:51.553174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.714 [2024-12-10 05:53:51.553188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.714 [2024-12-10 05:53:51.553195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.714 [2024-12-10 05:53:51.553201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.714 [2024-12-10 05:53:51.553216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.714 qpair failed and we were unable to recover it. 
00:29:03.714 [2024-12-10 05:53:51.563176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.714 [2024-12-10 05:53:51.563231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.714 [2024-12-10 05:53:51.563247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.714 [2024-12-10 05:53:51.563254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.714 [2024-12-10 05:53:51.563260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.714 [2024-12-10 05:53:51.563275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.714 qpair failed and we were unable to recover it. 
00:29:03.714 [2024-12-10 05:53:51.573157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.714 [2024-12-10 05:53:51.573222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.714 [2024-12-10 05:53:51.573236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.714 [2024-12-10 05:53:51.573243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.714 [2024-12-10 05:53:51.573249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.714 [2024-12-10 05:53:51.573264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.714 qpair failed and we were unable to recover it. 
00:29:03.714 [2024-12-10 05:53:51.583189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.714 [2024-12-10 05:53:51.583240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.714 [2024-12-10 05:53:51.583253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.714 [2024-12-10 05:53:51.583260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.714 [2024-12-10 05:53:51.583267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.714 [2024-12-10 05:53:51.583281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.714 qpair failed and we were unable to recover it. 
00:29:03.714 [2024-12-10 05:53:51.593263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.714 [2024-12-10 05:53:51.593369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.714 [2024-12-10 05:53:51.593383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.714 [2024-12-10 05:53:51.593390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.714 [2024-12-10 05:53:51.593397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.714 [2024-12-10 05:53:51.593412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.714 qpair failed and we were unable to recover it. 
00:29:03.974 [2024-12-10 05:53:51.603254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.974 [2024-12-10 05:53:51.603310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.974 [2024-12-10 05:53:51.603323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.974 [2024-12-10 05:53:51.603330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.974 [2024-12-10 05:53:51.603337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.974 [2024-12-10 05:53:51.603354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.974 qpair failed and we were unable to recover it. 
00:29:03.974 [2024-12-10 05:53:51.613277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.974 [2024-12-10 05:53:51.613329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.974 [2024-12-10 05:53:51.613343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.974 [2024-12-10 05:53:51.613350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.974 [2024-12-10 05:53:51.613357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.974 [2024-12-10 05:53:51.613371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.974 qpair failed and we were unable to recover it. 
00:29:03.974 [2024-12-10 05:53:51.623284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.974 [2024-12-10 05:53:51.623345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.974 [2024-12-10 05:53:51.623358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.974 [2024-12-10 05:53:51.623365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.974 [2024-12-10 05:53:51.623371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.974 [2024-12-10 05:53:51.623386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.974 qpair failed and we were unable to recover it. 
00:29:03.974 [2024-12-10 05:53:51.633332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.974 [2024-12-10 05:53:51.633385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.974 [2024-12-10 05:53:51.633398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.974 [2024-12-10 05:53:51.633405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.974 [2024-12-10 05:53:51.633411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.974 [2024-12-10 05:53:51.633426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.974 qpair failed and we were unable to recover it. 
00:29:03.974 [2024-12-10 05:53:51.643376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.975 [2024-12-10 05:53:51.643445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.975 [2024-12-10 05:53:51.643458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.975 [2024-12-10 05:53:51.643465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.975 [2024-12-10 05:53:51.643472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.975 [2024-12-10 05:53:51.643486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975 [2024-12-10 05:53:51.653401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.975 [2024-12-10 05:53:51.653457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.975 [2024-12-10 05:53:51.653471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.975 [2024-12-10 05:53:51.653478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.975 [2024-12-10 05:53:51.653484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.975 [2024-12-10 05:53:51.653499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975 [2024-12-10 05:53:51.663408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.975 [2024-12-10 05:53:51.663479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.975 [2024-12-10 05:53:51.663493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.975 [2024-12-10 05:53:51.663499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.975 [2024-12-10 05:53:51.663506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.975 [2024-12-10 05:53:51.663520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975 [2024-12-10 05:53:51.673470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.975 [2024-12-10 05:53:51.673522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.975 [2024-12-10 05:53:51.673536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.975 [2024-12-10 05:53:51.673542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.975 [2024-12-10 05:53:51.673549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.975 [2024-12-10 05:53:51.673563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975 [2024-12-10 05:53:51.683502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.975 [2024-12-10 05:53:51.683559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.975 [2024-12-10 05:53:51.683572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.975 [2024-12-10 05:53:51.683579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.975 [2024-12-10 05:53:51.683585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.975 [2024-12-10 05:53:51.683601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975 [2024-12-10 05:53:51.693539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.975 [2024-12-10 05:53:51.693591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.975 [2024-12-10 05:53:51.693609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.975 [2024-12-10 05:53:51.693616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.975 [2024-12-10 05:53:51.693622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.975 [2024-12-10 05:53:51.693637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975 [2024-12-10 05:53:51.703566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.975 [2024-12-10 05:53:51.703625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.975 [2024-12-10 05:53:51.703639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.975 [2024-12-10 05:53:51.703646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.975 [2024-12-10 05:53:51.703652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.975 [2024-12-10 05:53:51.703668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975 [2024-12-10 05:53:51.713578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.975 [2024-12-10 05:53:51.713631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.975 [2024-12-10 05:53:51.713644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.975 [2024-12-10 05:53:51.713651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.975 [2024-12-10 05:53:51.713658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:03.975 [2024-12-10 05:53:51.713672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975–00:29:04.238 [The same seven-record error sequence repeats 34 more times at roughly 10 ms intervals, timestamps 2024-12-10 05:53:51.723601 through 05:53:52.054618, with identical content each iteration: Unknown controller ID 0x1; Connect command failed, rc -5 (sct 1, sc 130) against trtype:TCP traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; failed NVMe-oF Fabric CONNECT poll on tqpair=0x7f4758000b90; CQ transport error -6 (No such device or address) on qpair id 2; and "qpair failed and we were unable to recover it."]
00:29:04.238 [2024-12-10 05:53:52.064580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.238 [2024-12-10 05:53:52.064632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.238 [2024-12-10 05:53:52.064645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.238 [2024-12-10 05:53:52.064652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.238 [2024-12-10 05:53:52.064658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.238 [2024-12-10 05:53:52.064673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.238 qpair failed and we were unable to recover it. 
00:29:04.238 [2024-12-10 05:53:52.074566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.238 [2024-12-10 05:53:52.074613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.238 [2024-12-10 05:53:52.074626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.238 [2024-12-10 05:53:52.074633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.238 [2024-12-10 05:53:52.074639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.238 [2024-12-10 05:53:52.074655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.238 qpair failed and we were unable to recover it. 
00:29:04.238 [2024-12-10 05:53:52.084619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.238 [2024-12-10 05:53:52.084681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.238 [2024-12-10 05:53:52.084694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.238 [2024-12-10 05:53:52.084700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.238 [2024-12-10 05:53:52.084707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.238 [2024-12-10 05:53:52.084722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.238 qpair failed and we were unable to recover it. 
00:29:04.238 [2024-12-10 05:53:52.094666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.238 [2024-12-10 05:53:52.094716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.238 [2024-12-10 05:53:52.094729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.238 [2024-12-10 05:53:52.094736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.238 [2024-12-10 05:53:52.094742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.238 [2024-12-10 05:53:52.094757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.238 qpair failed and we were unable to recover it. 
00:29:04.238 [2024-12-10 05:53:52.104651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.238 [2024-12-10 05:53:52.104703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.238 [2024-12-10 05:53:52.104715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.238 [2024-12-10 05:53:52.104722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.238 [2024-12-10 05:53:52.104729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.238 [2024-12-10 05:53:52.104744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.238 qpair failed and we were unable to recover it. 
00:29:04.238 [2024-12-10 05:53:52.114728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.238 [2024-12-10 05:53:52.114794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.238 [2024-12-10 05:53:52.114807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.238 [2024-12-10 05:53:52.114814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.238 [2024-12-10 05:53:52.114821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.238 [2024-12-10 05:53:52.114836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.238 qpair failed and we were unable to recover it. 
00:29:04.238 [2024-12-10 05:53:52.124722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.238 [2024-12-10 05:53:52.124779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.238 [2024-12-10 05:53:52.124795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.238 [2024-12-10 05:53:52.124802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.238 [2024-12-10 05:53:52.124809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.238 [2024-12-10 05:53:52.124824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.238 qpair failed and we were unable to recover it. 
00:29:04.500 [2024-12-10 05:53:52.134801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.500 [2024-12-10 05:53:52.134866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.500 [2024-12-10 05:53:52.134880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.500 [2024-12-10 05:53:52.134886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.500 [2024-12-10 05:53:52.134892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.500 [2024-12-10 05:53:52.134907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.500 qpair failed and we were unable to recover it. 
00:29:04.500 [2024-12-10 05:53:52.144769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.500 [2024-12-10 05:53:52.144825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.500 [2024-12-10 05:53:52.144838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.500 [2024-12-10 05:53:52.144844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.500 [2024-12-10 05:53:52.144851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.500 [2024-12-10 05:53:52.144865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.500 qpair failed and we were unable to recover it. 
00:29:04.500 [2024-12-10 05:53:52.154798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.500 [2024-12-10 05:53:52.154852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.500 [2024-12-10 05:53:52.154865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.500 [2024-12-10 05:53:52.154871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.500 [2024-12-10 05:53:52.154878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.501 [2024-12-10 05:53:52.154893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.501 qpair failed and we were unable to recover it. 
00:29:04.501 [2024-12-10 05:53:52.164830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.501 [2024-12-10 05:53:52.164886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.501 [2024-12-10 05:53:52.164898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.501 [2024-12-10 05:53:52.164905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.501 [2024-12-10 05:53:52.164912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.501 [2024-12-10 05:53:52.164929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.501 qpair failed and we were unable to recover it. 
00:29:04.501 [2024-12-10 05:53:52.174898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.501 [2024-12-10 05:53:52.174954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.501 [2024-12-10 05:53:52.174967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.501 [2024-12-10 05:53:52.174974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.501 [2024-12-10 05:53:52.174980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.501 [2024-12-10 05:53:52.174995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.501 qpair failed and we were unable to recover it. 
00:29:04.501 [2024-12-10 05:53:52.184814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.501 [2024-12-10 05:53:52.184866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.501 [2024-12-10 05:53:52.184880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.501 [2024-12-10 05:53:52.184886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.501 [2024-12-10 05:53:52.184893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.501 [2024-12-10 05:53:52.184908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.501 qpair failed and we were unable to recover it. 
00:29:04.501 [2024-12-10 05:53:52.194900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.501 [2024-12-10 05:53:52.194955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.501 [2024-12-10 05:53:52.194968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.501 [2024-12-10 05:53:52.194975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.501 [2024-12-10 05:53:52.194981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.501 [2024-12-10 05:53:52.194995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.501 qpair failed and we were unable to recover it. 
00:29:04.501 [2024-12-10 05:53:52.204940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.501 [2024-12-10 05:53:52.204994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.501 [2024-12-10 05:53:52.205007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.501 [2024-12-10 05:53:52.205014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.501 [2024-12-10 05:53:52.205021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.501 [2024-12-10 05:53:52.205035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.501 qpair failed and we were unable to recover it. 
00:29:04.501 [2024-12-10 05:53:52.214999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.501 [2024-12-10 05:53:52.215075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.501 [2024-12-10 05:53:52.215088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.501 [2024-12-10 05:53:52.215096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.501 [2024-12-10 05:53:52.215102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.501 [2024-12-10 05:53:52.215116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.501 qpair failed and we were unable to recover it. 
00:29:04.501 [2024-12-10 05:53:52.224987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.501 [2024-12-10 05:53:52.225036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.501 [2024-12-10 05:53:52.225049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.501 [2024-12-10 05:53:52.225056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.501 [2024-12-10 05:53:52.225062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.501 [2024-12-10 05:53:52.225077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.501 qpair failed and we were unable to recover it. 
00:29:04.501 [2024-12-10 05:53:52.234999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.501 [2024-12-10 05:53:52.235073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.501 [2024-12-10 05:53:52.235087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.501 [2024-12-10 05:53:52.235094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.501 [2024-12-10 05:53:52.235101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.501 [2024-12-10 05:53:52.235116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.501 qpair failed and we were unable to recover it. 
00:29:04.501 [2024-12-10 05:53:52.245057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.501 [2024-12-10 05:53:52.245117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.501 [2024-12-10 05:53:52.245131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.501 [2024-12-10 05:53:52.245138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.501 [2024-12-10 05:53:52.245145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.501 [2024-12-10 05:53:52.245161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.501 qpair failed and we were unable to recover it. 
00:29:04.501 [2024-12-10 05:53:52.255077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.501 [2024-12-10 05:53:52.255138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.501 [2024-12-10 05:53:52.255154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.501 [2024-12-10 05:53:52.255161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.501 [2024-12-10 05:53:52.255172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.501 [2024-12-10 05:53:52.255187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.501 qpair failed and we were unable to recover it. 
00:29:04.501 [2024-12-10 05:53:52.265126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.502 [2024-12-10 05:53:52.265198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.502 [2024-12-10 05:53:52.265212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.502 [2024-12-10 05:53:52.265219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.502 [2024-12-10 05:53:52.265225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.502 [2024-12-10 05:53:52.265241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.502 qpair failed and we were unable to recover it. 
00:29:04.502 [2024-12-10 05:53:52.275137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.502 [2024-12-10 05:53:52.275189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.502 [2024-12-10 05:53:52.275203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.502 [2024-12-10 05:53:52.275209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.502 [2024-12-10 05:53:52.275216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.502 [2024-12-10 05:53:52.275231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.502 qpair failed and we were unable to recover it. 
00:29:04.502 [2024-12-10 05:53:52.285172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.502 [2024-12-10 05:53:52.285228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.502 [2024-12-10 05:53:52.285242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.502 [2024-12-10 05:53:52.285249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.502 [2024-12-10 05:53:52.285255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.502 [2024-12-10 05:53:52.285271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.502 qpair failed and we were unable to recover it. 
00:29:04.502 [2024-12-10 05:53:52.295116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.502 [2024-12-10 05:53:52.295178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.502 [2024-12-10 05:53:52.295191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.502 [2024-12-10 05:53:52.295198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.502 [2024-12-10 05:53:52.295207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.502 [2024-12-10 05:53:52.295222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.502 qpair failed and we were unable to recover it. 
00:29:04.502 [2024-12-10 05:53:52.305200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.502 [2024-12-10 05:53:52.305258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.502 [2024-12-10 05:53:52.305271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.502 [2024-12-10 05:53:52.305279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.502 [2024-12-10 05:53:52.305285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.502 [2024-12-10 05:53:52.305300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.502 qpair failed and we were unable to recover it. 
00:29:04.502 [2024-12-10 05:53:52.315254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.502 [2024-12-10 05:53:52.315306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.502 [2024-12-10 05:53:52.315319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.502 [2024-12-10 05:53:52.315326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.502 [2024-12-10 05:53:52.315332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.502 [2024-12-10 05:53:52.315346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.502 qpair failed and we were unable to recover it. 
00:29:04.502 [2024-12-10 05:53:52.325287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.502 [2024-12-10 05:53:52.325344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.502 [2024-12-10 05:53:52.325357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.502 [2024-12-10 05:53:52.325364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.502 [2024-12-10 05:53:52.325370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.502 [2024-12-10 05:53:52.325385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.502 qpair failed and we were unable to recover it. 
00:29:04.502 [2024-12-10 05:53:52.335274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.502 [2024-12-10 05:53:52.335366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.502 [2024-12-10 05:53:52.335379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.502 [2024-12-10 05:53:52.335386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.502 [2024-12-10 05:53:52.335391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.502 [2024-12-10 05:53:52.335406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.502 qpair failed and we were unable to recover it. 
00:29:04.502 [2024-12-10 05:53:52.345295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.502 [2024-12-10 05:53:52.345353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.502 [2024-12-10 05:53:52.345365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.502 [2024-12-10 05:53:52.345372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.502 [2024-12-10 05:53:52.345378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.502 [2024-12-10 05:53:52.345393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.502 qpair failed and we were unable to recover it. 
00:29:04.502 [2024-12-10 05:53:52.355325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.502 [2024-12-10 05:53:52.355381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.502 [2024-12-10 05:53:52.355395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.502 [2024-12-10 05:53:52.355402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.502 [2024-12-10 05:53:52.355408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.502 [2024-12-10 05:53:52.355424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.502 qpair failed and we were unable to recover it. 
00:29:04.502 [2024-12-10 05:53:52.365416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.502 [2024-12-10 05:53:52.365474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.503 [2024-12-10 05:53:52.365487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.503 [2024-12-10 05:53:52.365494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.503 [2024-12-10 05:53:52.365501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.503 [2024-12-10 05:53:52.365515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.503 qpair failed and we were unable to recover it. 
00:29:04.503 [2024-12-10 05:53:52.375446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.503 [2024-12-10 05:53:52.375502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.503 [2024-12-10 05:53:52.375515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.503 [2024-12-10 05:53:52.375521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.503 [2024-12-10 05:53:52.375528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.503 [2024-12-10 05:53:52.375542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.503 qpair failed and we were unable to recover it. 
00:29:04.503 [2024-12-10 05:53:52.385464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.503 [2024-12-10 05:53:52.385515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.503 [2024-12-10 05:53:52.385531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.503 [2024-12-10 05:53:52.385538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.503 [2024-12-10 05:53:52.385544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.503 [2024-12-10 05:53:52.385558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.503 qpair failed and we were unable to recover it. 
00:29:04.807 [2024-12-10 05:53:52.395490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.807 [2024-12-10 05:53:52.395543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.807 [2024-12-10 05:53:52.395556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.807 [2024-12-10 05:53:52.395562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.807 [2024-12-10 05:53:52.395569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.807 [2024-12-10 05:53:52.395584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.807 qpair failed and we were unable to recover it. 
00:29:04.807 [2024-12-10 05:53:52.405473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.807 [2024-12-10 05:53:52.405531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.405551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.405559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.405566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.405584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.415494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.415554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.415568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.415575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.415582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.415598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.425588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.425645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.425660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.425672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.425679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.425694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.435638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.435696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.435709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.435717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.435724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.435740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.445634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.445693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.445706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.445713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.445719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.445734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.455598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.455655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.455668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.455674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.455681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.455696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.465689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.465739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.465752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.465759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.465766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.465780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.475683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.475738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.475752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.475760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.475768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.475783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.485717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.485809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.485822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.485829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.485836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.485851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.495810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.495866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.495879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.495886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.495892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.495907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.505786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.505840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.505853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.505860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.505866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.505881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.515772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.515828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.515840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.515847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.515853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.515867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.525780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.525855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.525868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.808 [2024-12-10 05:53:52.525875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.808 [2024-12-10 05:53:52.525882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.808 [2024-12-10 05:53:52.525897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.808 qpair failed and we were unable to recover it. 
00:29:04.808 [2024-12-10 05:53:52.535902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.808 [2024-12-10 05:53:52.535967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.808 [2024-12-10 05:53:52.535980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.535987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.535994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.536008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.545828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.545885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.545899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.545905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.545912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.545927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.555931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.555985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.555999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.556009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.556015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.556030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.565980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.566036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.566050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.566056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.566063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.566078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.576041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.576108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.576121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.576128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.576134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.576149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.586006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.586083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.586097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.586104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.586110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.586125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.596046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.596100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.596114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.596121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.596127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.596146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.606087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.606141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.606155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.606162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.606172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.606188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.616145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.616207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.616220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.616227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.616234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.616249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.626134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.626196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.626209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.626217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.626223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.626237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.636155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.636211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.636224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.636231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.636238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.636253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.646235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.646350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.646365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.646372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.646379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.646394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.656144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.656202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.656215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.656222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.656229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.656244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.809 [2024-12-10 05:53:52.666243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.809 [2024-12-10 05:53:52.666298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.809 [2024-12-10 05:53:52.666314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.809 [2024-12-10 05:53:52.666322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.809 [2024-12-10 05:53:52.666329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.809 [2024-12-10 05:53:52.666358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.809 qpair failed and we were unable to recover it. 
00:29:04.810 [2024-12-10 05:53:52.676260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.810 [2024-12-10 05:53:52.676309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.810 [2024-12-10 05:53:52.676322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.810 [2024-12-10 05:53:52.676328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.810 [2024-12-10 05:53:52.676335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.810 [2024-12-10 05:53:52.676351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.810 qpair failed and we were unable to recover it. 
00:29:04.810 [2024-12-10 05:53:52.686351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.810 [2024-12-10 05:53:52.686416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.810 [2024-12-10 05:53:52.686433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.810 [2024-12-10 05:53:52.686440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.810 [2024-12-10 05:53:52.686446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:04.810 [2024-12-10 05:53:52.686461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.810 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-12-10 05:53:52.696340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.093 [2024-12-10 05:53:52.696395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.093 [2024-12-10 05:53:52.696409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.093 [2024-12-10 05:53:52.696416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.093 [2024-12-10 05:53:52.696422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.093 [2024-12-10 05:53:52.696437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.093 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-12-10 05:53:52.706359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.093 [2024-12-10 05:53:52.706417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.093 [2024-12-10 05:53:52.706431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.093 [2024-12-10 05:53:52.706438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.093 [2024-12-10 05:53:52.706444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.093 [2024-12-10 05:53:52.706459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.093 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-12-10 05:53:52.716398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.093 [2024-12-10 05:53:52.716452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.093 [2024-12-10 05:53:52.716465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.093 [2024-12-10 05:53:52.716472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.093 [2024-12-10 05:53:52.716479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.093 [2024-12-10 05:53:52.716494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.093 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-12-10 05:53:52.726426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.093 [2024-12-10 05:53:52.726480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.093 [2024-12-10 05:53:52.726493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.093 [2024-12-10 05:53:52.726499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.093 [2024-12-10 05:53:52.726509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.093 [2024-12-10 05:53:52.726524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.093 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-12-10 05:53:52.736455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.093 [2024-12-10 05:53:52.736508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.093 [2024-12-10 05:53:52.736521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.093 [2024-12-10 05:53:52.736528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.093 [2024-12-10 05:53:52.736535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.093 [2024-12-10 05:53:52.736550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.093 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-12-10 05:53:52.746475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.093 [2024-12-10 05:53:52.746535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.093 [2024-12-10 05:53:52.746547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.093 [2024-12-10 05:53:52.746555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.093 [2024-12-10 05:53:52.746561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.093 [2024-12-10 05:53:52.746576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.093 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-12-10 05:53:52.756440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.093 [2024-12-10 05:53:52.756495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.093 [2024-12-10 05:53:52.756508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.093 [2024-12-10 05:53:52.756515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.093 [2024-12-10 05:53:52.756521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.093 [2024-12-10 05:53:52.756536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.093 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-12-10 05:53:52.766539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.093 [2024-12-10 05:53:52.766595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.093 [2024-12-10 05:53:52.766608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.093 [2024-12-10 05:53:52.766615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.093 [2024-12-10 05:53:52.766621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.093 [2024-12-10 05:53:52.766636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.093 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-12-10 05:53:52.776495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.093 [2024-12-10 05:53:52.776551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.093 [2024-12-10 05:53:52.776565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.093 [2024-12-10 05:53:52.776572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.093 [2024-12-10 05:53:52.776578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.093 [2024-12-10 05:53:52.776593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.093 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-12-10 05:53:52.786592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.093 [2024-12-10 05:53:52.786651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.093 [2024-12-10 05:53:52.786664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.093 [2024-12-10 05:53:52.786670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.093 [2024-12-10 05:53:52.786677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.093 [2024-12-10 05:53:52.786692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.093 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-12-10 05:53:52.796611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.093 [2024-12-10 05:53:52.796664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.093 [2024-12-10 05:53:52.796677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.093 [2024-12-10 05:53:52.796684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.093 [2024-12-10 05:53:52.796690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.796705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.806621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.806682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.806696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.806703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.806709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.806724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.816687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.816747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.816763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.816771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.816778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.816793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.826710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.826763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.826776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.826783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.826789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.826804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.836760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.836828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.836840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.836847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.836853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.836868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.846788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.846870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.846883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.846890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.846896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.846911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.856799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.856855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.856868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.856874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.856884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.856899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.866746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.866808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.866821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.866828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.866834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.866848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.876849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.876902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.876914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.876921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.876928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.876942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.886880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.886935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.886948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.886955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.886961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.886976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.896956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.897016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.897031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.897038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.897044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.897059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.906890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.906949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.906964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.906972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.906979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.906994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.916974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.917032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.917046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.917054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.917061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.917076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.926969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.094 [2024-12-10 05:53:52.927029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.094 [2024-12-10 05:53:52.927043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.094 [2024-12-10 05:53:52.927051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.094 [2024-12-10 05:53:52.927057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.094 [2024-12-10 05:53:52.927072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-12-10 05:53:52.937023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.095 [2024-12-10 05:53:52.937081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.095 [2024-12-10 05:53:52.937095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.095 [2024-12-10 05:53:52.937102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.095 [2024-12-10 05:53:52.937109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.095 [2024-12-10 05:53:52.937124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.095 qpair failed and we were unable to recover it. 
00:29:05.095 [2024-12-10 05:53:52.947049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.095 [2024-12-10 05:53:52.947103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.095 [2024-12-10 05:53:52.947119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.095 [2024-12-10 05:53:52.947126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.095 [2024-12-10 05:53:52.947133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.095 [2024-12-10 05:53:52.947147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.095 qpair failed and we were unable to recover it. 
00:29:05.095 [2024-12-10 05:53:52.957070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.095 [2024-12-10 05:53:52.957124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.095 [2024-12-10 05:53:52.957137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.095 [2024-12-10 05:53:52.957144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.095 [2024-12-10 05:53:52.957151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.095 [2024-12-10 05:53:52.957168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.095 qpair failed and we were unable to recover it. 
00:29:05.356 [2024-12-10 05:53:53.187748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.356 [2024-12-10 05:53:53.187803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.356 [2024-12-10 05:53:53.187816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.356 [2024-12-10 05:53:53.187823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.356 [2024-12-10 05:53:53.187830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4758000b90 00:29:05.356 [2024-12-10 05:53:53.187844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:05.356 qpair failed and we were unable to recover it. 00:29:05.356 [2024-12-10 05:53:53.187946] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:05.356 A controller has encountered a failure and is being reset. 00:29:05.356 Controller properly reset. 00:29:05.615 Initializing NVMe Controllers 00:29:05.615 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:05.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:05.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:05.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:05.615 Initialization complete. Launching workers. 
00:29:05.615 Starting thread on core 1 00:29:05.615 Starting thread on core 2 00:29:05.615 Starting thread on core 3 00:29:05.615 Starting thread on core 0 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:05.615 00:29:05.615 real 0m10.630s 00:29:05.615 user 0m19.409s 00:29:05.615 sys 0m4.689s 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.615 ************************************ 00:29:05.615 END TEST nvmf_target_disconnect_tc2 00:29:05.615 ************************************ 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.615 rmmod nvme_tcp 00:29:05.615 rmmod nvme_fabrics 00:29:05.615 rmmod nvme_keyring 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1351661 ']' 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1351661 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1351661 ']' 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1351661 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1351661 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1351661' 00:29:05.615 killing process with pid 1351661 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1351661 00:29:05.615 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1351661 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.874 05:53:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.778 05:53:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:08.037 00:29:08.037 real 0m19.391s 00:29:08.037 user 0m46.382s 00:29:08.037 sys 0m9.632s 00:29:08.037 05:53:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:08.037 05:53:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:08.037 ************************************ 00:29:08.037 END TEST nvmf_target_disconnect 00:29:08.037 ************************************ 00:29:08.037 05:53:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:08.037 00:29:08.037 real 5m50.233s 00:29:08.037 user 10m28.350s 00:29:08.037 sys 1m57.547s 00:29:08.037 05:53:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:08.037 05:53:55 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.037 ************************************ 00:29:08.037 END TEST nvmf_host 00:29:08.037 ************************************ 00:29:08.037 05:53:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:08.037 05:53:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:08.037 05:53:55 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:08.037 05:53:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:08.037 05:53:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:08.037 05:53:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:08.037 ************************************ 00:29:08.037 START TEST nvmf_target_core_interrupt_mode 00:29:08.037 ************************************ 00:29:08.037 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:08.037 * Looking for test storage... 
00:29:08.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:08.037 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:08.037 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:29:08.037 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.296 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:08.297 05:53:55 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.297 --rc 
genhtml_branch_coverage=1 00:29:08.297 --rc genhtml_function_coverage=1 00:29:08.297 --rc genhtml_legend=1 00:29:08.297 --rc geninfo_all_blocks=1 00:29:08.297 --rc geninfo_unexecuted_blocks=1 00:29:08.297 00:29:08.297 ' 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.297 --rc genhtml_branch_coverage=1 00:29:08.297 --rc genhtml_function_coverage=1 00:29:08.297 --rc genhtml_legend=1 00:29:08.297 --rc geninfo_all_blocks=1 00:29:08.297 --rc geninfo_unexecuted_blocks=1 00:29:08.297 00:29:08.297 ' 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.297 --rc genhtml_branch_coverage=1 00:29:08.297 --rc genhtml_function_coverage=1 00:29:08.297 --rc genhtml_legend=1 00:29:08.297 --rc geninfo_all_blocks=1 00:29:08.297 --rc geninfo_unexecuted_blocks=1 00:29:08.297 00:29:08.297 ' 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.297 --rc genhtml_branch_coverage=1 00:29:08.297 --rc genhtml_function_coverage=1 00:29:08.297 --rc genhtml_legend=1 00:29:08.297 --rc geninfo_all_blocks=1 00:29:08.297 --rc geninfo_unexecuted_blocks=1 00:29:08.297 00:29:08.297 ' 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.297 
05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.297 05:53:55 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:08.297 
05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:08.297 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:08.297 ************************************ 00:29:08.297 START TEST nvmf_abort 00:29:08.297 ************************************ 00:29:08.297 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:08.297 * Looking for test storage... 
00:29:08.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:08.297 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:08.297 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:29:08.297 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:08.297 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:08.297 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.297 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.297 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.297 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.298 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:08.558 05:53:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:08.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.558 --rc genhtml_branch_coverage=1 00:29:08.558 --rc genhtml_function_coverage=1 00:29:08.558 --rc genhtml_legend=1 00:29:08.558 --rc geninfo_all_blocks=1 00:29:08.558 --rc geninfo_unexecuted_blocks=1 00:29:08.558 00:29:08.558 ' 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:08.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.558 --rc genhtml_branch_coverage=1 00:29:08.558 --rc genhtml_function_coverage=1 00:29:08.558 --rc genhtml_legend=1 00:29:08.558 --rc geninfo_all_blocks=1 00:29:08.558 --rc geninfo_unexecuted_blocks=1 00:29:08.558 00:29:08.558 ' 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:08.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.558 --rc genhtml_branch_coverage=1 00:29:08.558 --rc genhtml_function_coverage=1 00:29:08.558 --rc genhtml_legend=1 00:29:08.558 --rc geninfo_all_blocks=1 00:29:08.558 --rc geninfo_unexecuted_blocks=1 00:29:08.558 00:29:08.558 ' 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:08.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.558 --rc genhtml_branch_coverage=1 00:29:08.558 --rc genhtml_function_coverage=1 00:29:08.558 --rc genhtml_legend=1 00:29:08.558 --rc geninfo_all_blocks=1 00:29:08.558 --rc geninfo_unexecuted_blocks=1 00:29:08.558 00:29:08.558 ' 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.558 05:53:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.558 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:08.559 05:53:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:08.559 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:15.127 05:54:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:15.127 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:15.127 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:15.127 
05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.127 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:15.128 Found net devices under 0000:af:00.0: cvl_0_0 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:15.128 Found net devices under 0000:af:00.1: cvl_0_1 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:15.128 05:54:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:15.128 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:15.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:15.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:29:15.128 00:29:15.128 --- 10.0.0.2 ping statistics --- 00:29:15.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.128 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:15.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:15.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:29:15.128 00:29:15.128 --- 10.0.0.1 ping statistics --- 00:29:15.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.128 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1356243 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1356243 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1356243 ']' 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.128 [2024-12-10 05:54:02.146804] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:15.128 [2024-12-10 05:54:02.147696] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:29:15.128 [2024-12-10 05:54:02.147730] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.128 [2024-12-10 05:54:02.225683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:15.128 [2024-12-10 05:54:02.268277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.128 [2024-12-10 05:54:02.268307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.128 [2024-12-10 05:54:02.268314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.128 [2024-12-10 05:54:02.268320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.128 [2024-12-10 05:54:02.268325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:15.128 [2024-12-10 05:54:02.269492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:15.128 [2024-12-10 05:54:02.269597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.128 [2024-12-10 05:54:02.269598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:15.128 [2024-12-10 05:54:02.337833] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:15.128 [2024-12-10 05:54:02.338794] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:15.128 [2024-12-10 05:54:02.338852] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:15.128 [2024-12-10 05:54:02.339033] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.128 [2024-12-10 05:54:02.406317] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.128 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:15.129 Malloc0 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.129 Delay0 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.129 [2024-12-10 05:54:02.490225] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.129 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:15.129 [2024-12-10 05:54:02.657335] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:17.031 Initializing NVMe Controllers 00:29:17.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:17.031 controller IO queue size 128 less than required 00:29:17.031 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:17.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:17.031 Initialization complete. Launching workers. 
00:29:17.031 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38037 00:29:17.031 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38094, failed to submit 66 00:29:17.031 success 38037, unsuccessful 57, failed 0 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:17.031 rmmod nvme_tcp 00:29:17.031 rmmod nvme_fabrics 00:29:17.031 rmmod nvme_keyring 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:17.031 05:54:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1356243 ']' 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1356243 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1356243 ']' 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1356243 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1356243 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1356243' 00:29:17.031 killing process with pid 1356243 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1356243 00:29:17.031 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1356243 00:29:17.291 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:17.291 05:54:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:17.291 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:17.291 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:17.291 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:17.291 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:17.291 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:17.291 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.291 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:17.291 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.291 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.291 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.824 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:19.824 00:29:19.824 real 0m11.121s 00:29:19.824 user 0m10.595s 00:29:19.824 sys 0m5.665s 00:29:19.824 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.824 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.824 ************************************ 00:29:19.824 END TEST nvmf_abort 00:29:19.824 ************************************ 00:29:19.824 05:54:07 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:19.824 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:19.824 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.824 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:19.824 ************************************ 00:29:19.824 START TEST nvmf_ns_hotplug_stress 00:29:19.824 ************************************ 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:19.825 * Looking for test storage... 
00:29:19.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.825 05:54:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.825 05:54:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:19.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.825 --rc genhtml_branch_coverage=1 00:29:19.825 --rc genhtml_function_coverage=1 00:29:19.825 --rc genhtml_legend=1 00:29:19.825 --rc geninfo_all_blocks=1 00:29:19.825 --rc geninfo_unexecuted_blocks=1 00:29:19.825 00:29:19.825 ' 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:19.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.825 --rc genhtml_branch_coverage=1 00:29:19.825 --rc genhtml_function_coverage=1 00:29:19.825 --rc genhtml_legend=1 00:29:19.825 --rc geninfo_all_blocks=1 00:29:19.825 --rc geninfo_unexecuted_blocks=1 00:29:19.825 00:29:19.825 ' 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:19.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.825 --rc genhtml_branch_coverage=1 00:29:19.825 --rc genhtml_function_coverage=1 00:29:19.825 --rc genhtml_legend=1 00:29:19.825 --rc geninfo_all_blocks=1 00:29:19.825 --rc geninfo_unexecuted_blocks=1 00:29:19.825 00:29:19.825 ' 00:29:19.825 05:54:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:19.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.825 --rc genhtml_branch_coverage=1 00:29:19.825 --rc genhtml_function_coverage=1 00:29:19.825 --rc genhtml_legend=1 00:29:19.825 --rc geninfo_all_blocks=1 00:29:19.825 --rc geninfo_unexecuted_blocks=1 00:29:19.825 00:29:19.825 ' 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.825 05:54:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.825 
05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:19.825 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:19.826 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.099 
05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.099 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.359 05:54:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:25.359 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.359 05:54:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:25.359 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.359 
05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:25.359 Found net devices under 0000:af:00.0: cvl_0_0 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.359 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:25.359 Found net devices under 0000:af:00.1: cvl_0_1 00:29:25.359 
05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.359 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.360 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.360 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:29:25.360 00:29:25.360 --- 10.0.0.2 ping statistics --- 00:29:25.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.360 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:29:25.360 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:25.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:29:25.360 00:29:25.360 --- 10.0.0.1 ping statistics --- 00:29:25.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.360 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:29:25.360 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.360 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:25.360 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.360 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.360 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.360 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.360 05:54:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.360 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.360 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1360552 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1360552 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1360552 ']' 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.619 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:25.619 [2024-12-10 05:54:13.324782] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:25.619 [2024-12-10 05:54:13.325669] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:29:25.619 [2024-12-10 05:54:13.325701] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.619 [2024-12-10 05:54:13.402385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:25.619 [2024-12-10 05:54:13.442358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.619 [2024-12-10 05:54:13.442394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.619 [2024-12-10 05:54:13.442401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.619 [2024-12-10 05:54:13.442407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.619 [2024-12-10 05:54:13.442412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:25.619 [2024-12-10 05:54:13.443745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.619 [2024-12-10 05:54:13.443850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.619 [2024-12-10 05:54:13.443851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.878 [2024-12-10 05:54:13.511799] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:25.878 [2024-12-10 05:54:13.512547] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:25.878 [2024-12-10 05:54:13.512918] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:25.878 [2024-12-10 05:54:13.513030] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:25.878 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.878 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:25.878 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.878 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.878 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:25.878 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.878 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:29:25.878 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:25.878 [2024-12-10 05:54:13.748543] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.137 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:26.137 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.396 [2024-12-10 05:54:14.140951] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.396 05:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:26.655 05:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:26.913 Malloc0 00:29:26.913 05:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:26.913 Delay0 00:29:26.913 05:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.172 05:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:27.429 NULL1 00:29:27.429 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:27.687 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1360999 00:29:27.687 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:27.687 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:27.687 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.687 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.945 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:27.945 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:28.204 true 00:29:28.204 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:28.204 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.463 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:28.721 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:28.722 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:28.722 true 00:29:28.980 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:28.980 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.980 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:29.239 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:29.239 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:29.497 true 00:29:29.497 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:29.497 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.756 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.015 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:30.015 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:30.273 true 00:29:30.273 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:30.273 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.273 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.531 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:30.531 05:54:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:30.789 true 00:29:30.789 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:30.789 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.048 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.307 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:31.307 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:31.565 true 00:29:31.565 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:31.565 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.565 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.823 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:29:31.823 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:32.082 true 00:29:32.082 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:32.082 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.341 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:32.599 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:32.599 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:32.858 true 00:29:32.858 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:32.858 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.858 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:33.116 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:29:33.116 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:33.375 true 00:29:33.375 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:33.375 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.633 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:33.892 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:33.892 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:34.150 true 00:29:34.150 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:34.150 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.150 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:34.408 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:34.408 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:34.666 true 00:29:34.666 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:34.666 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.924 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:35.183 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:35.183 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:35.441 true 00:29:35.441 05:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:35.441 05:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.441 05:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:35.699 05:54:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:35.699 05:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:35.958 true 00:29:35.958 05:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:35.958 05:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.216 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:36.474 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:36.474 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:36.733 true 00:29:36.733 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:36.733 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.991 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:29:36.991 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:36.991 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:37.249 true 00:29:37.249 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:37.249 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.507 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.765 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:37.765 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:38.023 true 00:29:38.023 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:38.023 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.282 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:38.541 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:38.541 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:38.541 true 00:29:38.541 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:38.541 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.799 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:39.058 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:39.058 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:39.316 true 00:29:39.316 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:39.316 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.575 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:39.833 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:39.833 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:39.833 true 00:29:39.833 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:39.833 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.091 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:40.350 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:40.350 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:40.608 true 00:29:40.608 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:40.608 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.867 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:41.124 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:41.124 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:41.124 true 00:29:41.124 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:41.124 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.383 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:41.641 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:41.641 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:41.900 true 00:29:41.900 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:41.900 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.158 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:42.417 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:42.417 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:42.417 true 00:29:42.417 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:42.417 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.675 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:42.934 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:42.934 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:43.192 true 00:29:43.192 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:43.192 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.451 05:54:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.709 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:43.709 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:43.709 true 00:29:43.709 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:43.709 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.967 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:44.226 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:44.226 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:44.484 true 00:29:44.484 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:44.484 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:29:44.743 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.001 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:45.001 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:45.001 true 00:29:45.001 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:45.001 05:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.259 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.518 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:45.518 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:45.776 true 00:29:45.776 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:45.776 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:46.035 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.293 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:29:46.293 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:29:46.293 true 00:29:46.293 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:46.293 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.551 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.809 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:29:46.809 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:29:47.068 true 00:29:47.068 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:47.068 05:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.333 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.593 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:29:47.593 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:29:47.593 true 00:29:47.851 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:47.851 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.851 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.110 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:29:48.110 05:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:29:48.368 true 00:29:48.368 05:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:48.368 05:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.627 05:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.885 05:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:29:48.885 05:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:29:48.885 true 00:29:49.143 05:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:49.143 05:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.143 05:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.401 05:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:29:49.401 05:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:29:49.659 true 00:29:49.659 05:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:49.659 05:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.917 05:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.175 05:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:29:50.175 05:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:29:50.433 true 00:29:50.433 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:50.433 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.692 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.692 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:29:50.692 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:29:50.950 true 00:29:50.950 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:50.950 05:54:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.208 05:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.467 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:29:51.467 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:29:51.726 true 00:29:51.726 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:51.726 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.726 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.984 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:29:51.984 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:29:52.243 true 00:29:52.243 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 
00:29:52.243 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.501 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.759 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:29:52.759 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:29:53.018 true 00:29:53.018 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:53.018 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.275 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.276 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:29:53.276 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:29:53.534 true 00:29:53.534 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 1360999 00:29:53.534 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.792 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.051 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:29:54.051 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:29:54.309 true 00:29:54.309 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:54.309 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.566 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.566 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:29:54.566 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:29:54.824 true 00:29:54.824 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:54.824 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.083 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.393 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:29:55.393 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:29:55.687 true 00:29:55.687 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:55.687 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.687 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.993 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:29:55.993 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:29:56.251 true 00:29:56.251 05:54:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:56.251 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.510 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.510 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:29:56.510 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:29:56.769 true 00:29:56.769 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:56.769 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.027 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.286 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:29:57.286 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:29:57.543 true 
00:29:57.543 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:57.543 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.801 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.802 Initializing NVMe Controllers 00:29:57.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.802 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:29:57.802 Controller IO queue size 128, less than required. 00:29:57.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.802 WARNING: Some requested NVMe devices were skipped 00:29:57.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:57.802 Initialization complete. Launching workers. 
00:29:57.802 ========================================================
00:29:57.802                                                                           Latency(us)
00:29:57.802 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:29:57.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   28208.67      13.77    4537.60    1631.60    8607.99
00:29:57.802 ========================================================
00:29:57.802 Total                                                                    :   28208.67      13.77    4537.60    1631.60    8607.99
00:29:57.802
00:29:57.802 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:29:57.802 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:29:58.059 true 00:29:58.059 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360999 00:29:58.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1360999) - No such process 00:29:58.059 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1360999 00:29:58.059 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.318 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:58.576 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:58.576 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:58.576 
05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:58.576 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:58.576 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:58.576 null0 00:29:58.835 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:58.835 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:58.835 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:58.835 null1 00:29:58.835 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:58.835 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:58.835 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:59.094 null2 00:29:59.094 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:59.094 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:59.094 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:59.353 null3 00:29:59.353 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:59.353 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:59.353 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:59.353 null4 00:29:59.353 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:59.353 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:59.353 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:59.612 null5 00:29:59.612 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:59.612 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:59.612 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:59.871 null6 00:29:59.871 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:59.871 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:59.871 05:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:00.131 null7 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:00.131 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:00.132 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:00.132 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1366273 1366275 1366276 1366278 1366280 1366282 1366284 1366286 00:30:00.132 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:00.132 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.132 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:00.132 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.132 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:00.132 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:30:00.132 05:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:00.132 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:00.132 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:00.132 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:00.132 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.391 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:00.650 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.650 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:00.650 05:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:00.650 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:00.650 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:00.650 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:00.650 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:00.650 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:00.910 05:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:00.910 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:01.169 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.169 05:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:01.169 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:01.169 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:01.169 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:01.169 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:01.169 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:01.169 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:01.169 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.169 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.169 05:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:01.169 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.169 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.169 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:01.169 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.169 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.170 05:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.170 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:01.429 05:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:01.429 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.429 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:01.429 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:01.429 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:01.429 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:01.429 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:01.429 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:01.688 05:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:01.688 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:01.688 05:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:01.947 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:01.947 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:01.947 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.947 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:01.947 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:01.947 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:01.947 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:01.947 05:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.206 05:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.206 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:02.206 05:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.207 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.207 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:02.207 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:02.207 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:02.207 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:02.207 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:02.207 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:02.207 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:02.207 05:54:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:02.207 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.466 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:02.726 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:02.726 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:02.726 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:02.726 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:02.726 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.726 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:02.726 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:02.726 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.985 05:54:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.985 05:54:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:02.985 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:03.244 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:03.244 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:03.244 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:03.244 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:03.244 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:03.244 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:03.244 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:03.244 05:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.244 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.244 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.244 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:03.244 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.244 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.244 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:03.503 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:03.504 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:03.504 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:03.504 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:03.504 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:03.762 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.762 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.763 05:54:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.763 05:54:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:03.763 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:04.022 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:04.022 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:04.022 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:30:04.022 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:04.022 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.022 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:04.022 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:04.022 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:04.281 05:54:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@121 -- # sync 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.281 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.281 rmmod nvme_tcp 00:30:04.281 rmmod nvme_fabrics 00:30:04.281 rmmod nvme_keyring 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1360552 ']' 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1360552 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1360552 ']' 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1360552 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1360552 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1360552' 00:30:04.281 killing process with pid 1360552 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1360552 00:30:04.281 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1360552 00:30:04.541 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.541 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.541 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.541 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:04.541 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:04.541 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.541 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.541 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.541 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.541 
05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.541 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.541 05:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:07.076 00:30:07.076 real 0m47.132s 00:30:07.076 user 3m1.998s 00:30:07.076 sys 0m21.394s 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:07.076 ************************************ 00:30:07.076 END TEST nvmf_ns_hotplug_stress 00:30:07.076 ************************************ 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:07.076 ************************************ 00:30:07.076 START TEST nvmf_delete_subsystem 00:30:07.076 ************************************ 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:07.076 * Looking for test storage... 00:30:07.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:07.076 
05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:07.076 05:54:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:07.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.076 --rc genhtml_branch_coverage=1 00:30:07.076 --rc genhtml_function_coverage=1 00:30:07.076 --rc genhtml_legend=1 00:30:07.076 --rc geninfo_all_blocks=1 00:30:07.076 --rc geninfo_unexecuted_blocks=1 00:30:07.076 00:30:07.076 ' 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:07.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.076 --rc genhtml_branch_coverage=1 00:30:07.076 --rc genhtml_function_coverage=1 00:30:07.076 --rc genhtml_legend=1 00:30:07.076 --rc geninfo_all_blocks=1 00:30:07.076 --rc geninfo_unexecuted_blocks=1 00:30:07.076 00:30:07.076 ' 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:07.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.076 --rc genhtml_branch_coverage=1 00:30:07.076 --rc genhtml_function_coverage=1 00:30:07.076 --rc genhtml_legend=1 00:30:07.076 --rc geninfo_all_blocks=1 00:30:07.076 --rc 
geninfo_unexecuted_blocks=1 00:30:07.076 00:30:07.076 ' 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:07.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.076 --rc genhtml_branch_coverage=1 00:30:07.076 --rc genhtml_function_coverage=1 00:30:07.076 --rc genhtml_legend=1 00:30:07.076 --rc geninfo_all_blocks=1 00:30:07.076 --rc geninfo_unexecuted_blocks=1 00:30:07.076 00:30:07.076 ' 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.076 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.077 
05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:07.077 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:07.077 05:54:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:12.358 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:30:12.358 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:12.358 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.359 05:55:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:12.359 Found net devices under 0000:af:00.0: cvl_0_0 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:12.359 Found net devices under 0000:af:00.1: cvl_0_1 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:12.359 05:55:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.359 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:30:12.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:30:12.618 00:30:12.618 --- 10.0.0.2 ping statistics --- 00:30:12.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.618 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:12.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:30:12.618 00:30:12.618 --- 10.0.0.1 ping statistics --- 00:30:12.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.618 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:12.618 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:12.878 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1370566 00:30:12.878 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1370566 00:30:12.878 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:12.878 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1370566 ']' 00:30:12.878 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.878 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.878 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:12.878 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.878 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:12.878 [2024-12-10 05:55:00.565232] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:12.878 [2024-12-10 05:55:00.566181] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:30:12.878 [2024-12-10 05:55:00.566216] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.878 [2024-12-10 05:55:00.646095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:12.878 [2024-12-10 05:55:00.684741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.878 [2024-12-10 05:55:00.684783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.878 [2024-12-10 05:55:00.684790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.878 [2024-12-10 05:55:00.684796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.878 [2024-12-10 05:55:00.684801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.878 [2024-12-10 05:55:00.685853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.878 [2024-12-10 05:55:00.685854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.878 [2024-12-10 05:55:00.753850] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
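For reference, the per-test network topology that `nvmftestinit` builds in the trace above (one e810 port moved into a private namespace `cvl_0_0_ns_spdk` as the target side at 10.0.0.2, its sibling `cvl_0_1` left in the root namespace as the initiator at 10.0.0.1) amounts to the following sequence. It is sketched here as a dry run: the `ip`/`iptables` invocations are echoed rather than executed, since the real commands need root and the `cvl_0_*` device names are specific to this host.

```shell
# Dry-run sketch of the namespace setup traced in the log above.
# IP and IPT default to `echo ...` so no root or real NICs are needed;
# drop the echo to run for real (cvl_0_* names are host-specific).
IP="${IP:-echo ip}"
IPT="${IPT:-echo iptables}"

$IP netns add cvl_0_0_ns_spdk                 # target gets its own netns
$IP link set cvl_0_0 netns cvl_0_0_ns_spdk    # move target port into the netns
$IP addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
$IP netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
$IP link set cvl_0_1 up
$IP netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
$IP netns exec cvl_0_0_ns_spdk ip link set lo up
$IPT -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP in
```

With the topology up, the log's `ping -c 1 10.0.0.2` / `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` pair verifies both directions before the target is started inside the namespace.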
00:30:12.878 [2024-12-10 05:55:00.754361] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:12.878 [2024-12-10 05:55:00.754616] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:13.137 [2024-12-10 05:55:00.834654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:13.137 [2024-12-10 05:55:00.858950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:13.137 NULL1 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:13.137 Delay0 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.137 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:13.138 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.138 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1370590 00:30:13.138 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:13.138 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:13.138 [2024-12-10 05:55:00.971982] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:15.050 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:15.050 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.050 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Write completed with error (sct=0, 
sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 [2024-12-10 05:55:03.100550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5b40 is same with the state(6) to be set 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with 
error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 
00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 
00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 starting I/O failed: -6 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 [2024-12-10 05:55:03.102702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efd08000c80 is same with the state(6) to be set 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read 
completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Read completed with error (sct=0, sc=8) 00:30:15.310 Write completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Write completed with error (sct=0, sc=8) 00:30:15.311 Write completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Write completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Write completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:15.311 Read completed with error (sct=0, sc=8) 00:30:16.247 [2024-12-10 05:55:04.066075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f69b0 is same with the state(6) to be set 00:30:16.247 Read completed with error (sct=0, sc=8) 00:30:16.247 Read completed with error 
(sct=0, sc=8) 00:30:16.247 Read completed with error (sct=0, sc=8) 00:30:16.247 Write completed with error (sct=0, sc=8) 00:30:16.247 Write completed with error (sct=0, sc=8) 00:30:16.247 Read completed with error (sct=0, sc=8) 00:30:16.247 Read completed with error (sct=0, sc=8) 00:30:16.247 Read completed with error (sct=0, sc=8) 00:30:16.247 Read completed with error (sct=0, sc=8) 00:30:16.247 Write completed with error (sct=0, sc=8) 00:30:16.247 Read completed with error (sct=0, sc=8) 00:30:16.247 Read completed with error (sct=0, sc=8) 00:30:16.247 Write completed with error (sct=0, sc=8) 00:30:16.247 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 [2024-12-10 05:55:04.104256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f52c0 is same with the state(6) to be set 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 
00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 [2024-12-10 05:55:04.104896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5960 is same with the state(6) to be set 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write 
completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 [2024-12-10 05:55:04.105368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efd0800d060 is same with the state(6) to be set 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Write completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 Read completed with error (sct=0, sc=8) 00:30:16.248 [2024-12-10 05:55:04.105771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efd0800d800 is same with the state(6) to be set 00:30:16.248 Initializing NVMe Controllers 00:30:16.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:16.248 Controller IO queue size 128, less than required. 00:30:16.248 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:16.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:16.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:16.248 Initialization complete. Launching workers.
00:30:16.248 ========================================================
00:30:16.248 Latency(us)
00:30:16.248 Device Information : IOPS MiB/s Average min max
00:30:16.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.44 0.08 889044.03 285.85 1009978.25
00:30:16.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.55 0.08 966627.67 230.00 2001541.73
00:30:16.248 ========================================================
00:30:16.248 Total : 326.98 0.16 925713.50 230.00 2001541.73
00:30:16.248
00:30:16.248 [2024-12-10 05:55:04.106333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f69b0 (9): Bad file descriptor
00:30:16.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:16.248 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:16.248 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:30:16.248 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1370590
00:30:16.248 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1370590
00:30:16.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line
35: kill: (1370590) - No such process 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1370590 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1370590 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1370590 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:16.817 05:55:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.817 [2024-12-10 05:55:04.634970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1371258 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 
00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371258 00:30:16.817 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:17.075 [2024-12-10 05:55:04.721534] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:17.334 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:17.334 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371258 00:30:17.334 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:17.901 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:17.901 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371258 00:30:17.901 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:18.468 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:18.468 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371258 
00:30:18.468 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:19.036 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:19.036 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371258 00:30:19.036 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:19.294 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:19.294 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371258 00:30:19.294 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:19.861 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:19.861 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371258 00:30:19.861 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:20.120 Initializing NVMe Controllers 00:30:20.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:20.120 Controller IO queue size 128, less than required. 00:30:20.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:20.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:20.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:20.120 Initialization complete. Launching workers. 
00:30:20.120 ========================================================
00:30:20.120 Latency(us)
00:30:20.120 Device Information : IOPS MiB/s Average min max
00:30:20.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003862.70 1000138.52 1041990.65
00:30:20.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003999.51 1000320.67 1010064.44
00:30:20.120 ========================================================
00:30:20.120 Total : 256.00 0.12 1003931.10 1000138.52 1041990.65
00:30:20.120
00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371258
00:30:20.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1371258) - No such process
00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1371258
00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.379 rmmod nvme_tcp 00:30:20.379 rmmod nvme_fabrics 00:30:20.379 rmmod nvme_keyring 00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1370566 ']' 00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1370566 00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1370566 ']' 00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1370566 00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.379 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1370566 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:20.638 05:55:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1370566' 00:30:20.638 killing process with pid 1370566 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1370566 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1370566 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.638 05:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.638 05:55:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.174 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.174 00:30:23.174 real 0m16.108s 00:30:23.174 user 0m26.211s 00:30:23.174 sys 0m5.976s 00:30:23.174 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.174 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:23.174 ************************************ 00:30:23.174 END TEST nvmf_delete_subsystem 00:30:23.174 ************************************ 00:30:23.174 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:23.174 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:23.174 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:23.175 ************************************ 00:30:23.175 START TEST nvmf_host_management 00:30:23.175 ************************************ 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:23.175 * Looking for test storage... 
00:30:23.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.175 05:55:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:23.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.175 --rc genhtml_branch_coverage=1 00:30:23.175 --rc genhtml_function_coverage=1 00:30:23.175 --rc genhtml_legend=1 00:30:23.175 --rc geninfo_all_blocks=1 00:30:23.175 --rc geninfo_unexecuted_blocks=1 00:30:23.175 00:30:23.175 ' 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:23.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.175 --rc genhtml_branch_coverage=1 00:30:23.175 --rc genhtml_function_coverage=1 00:30:23.175 --rc genhtml_legend=1 00:30:23.175 --rc geninfo_all_blocks=1 00:30:23.175 --rc geninfo_unexecuted_blocks=1 00:30:23.175 00:30:23.175 ' 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:23.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.175 --rc genhtml_branch_coverage=1 00:30:23.175 --rc genhtml_function_coverage=1 00:30:23.175 --rc genhtml_legend=1 00:30:23.175 --rc geninfo_all_blocks=1 00:30:23.175 --rc geninfo_unexecuted_blocks=1 00:30:23.175 00:30:23.175 ' 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:23.175 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.175 --rc genhtml_branch_coverage=1 00:30:23.175 --rc genhtml_function_coverage=1 00:30:23.175 --rc genhtml_legend=1 00:30:23.175 --rc geninfo_all_blocks=1 00:30:23.175 --rc geninfo_unexecuted_blocks=1 00:30:23.175 00:30:23.175 ' 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.175 05:55:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.175 
05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.175 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.176 05:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:29.748 
05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.748 05:55:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:29.748 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.748 05:55:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:29.748 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.748 05:55:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:29.748 Found net devices under 0000:af:00.0: cvl_0_0 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.748 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:29.749 Found net devices under 0000:af:00.1: cvl_0_1 00:30:29.749 05:55:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
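The NIC discovery traced above (nvmf/common.sh@410-429) maps each candidate PCI function to its kernel interface by globbing the `net/` subdirectory of the device's sysfs node, then strips the path prefix with `##*/`. A minimal sketch of that lookup; `list_pci_net_devs` and the `SYSFS_BASE` override are hypothetical names added here for illustration (the traced code globs `/sys/bus/pci/devices` directly):

```shell
# Sketch of sysfs PCI-to-netdev discovery, assuming SYSFS_BASE as a
# test-friendly stand-in for /sys/bus/pci/devices.
list_pci_net_devs() {
    local base=${SYSFS_BASE:-/sys/bus/pci/devices} pci
    for pci in "$@"; do
        # glob the net/ subdirectory of this PCI function
        local devs=("$base/$pci/net/"*)
        [[ -e ${devs[0]} ]] || continue   # no kernel netdev bound here
        # strip the sysfs path prefix, keeping only interface names
        printf '%s\n' "${devs[@]##*/}"
    done
}
```

This is how the run above resolves 0000:af:00.0 and 0000:af:00.1 to cvl_0_0 and cvl_0_1 before assigning them the target and initiator roles.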
00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:29.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:29.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:30:29.749 00:30:29.749 --- 10.0.0.2 ping statistics --- 00:30:29.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.749 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:29.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:30:29.749 00:30:29.749 --- 10.0.0.1 ping statistics --- 00:30:29.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.749 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
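The nvmf/common.sh xtrace above (nvmf_tcp_init) is building the harness's two-interface test topology: the target NIC cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24, the initiator NIC cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is verified with one ping in each direction. A stand-alone sketch of those steps, assuming the same interface names and addresses as the log; the run() dry-run wrapper is our addition so the sequence can be previewed without root or the physical NICs:

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology nvmf_tcp_init builds (names/IPs from the
# log above; run() is a hypothetical helper, not part of nvmf/common.sh).
set -euo pipefail

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0      # moved into the namespace, target side, 10.0.0.2
INITIATOR_IF=cvl_0_1   # stays in the root namespace, initiator side, 10.0.0.1

run() {
    # DRY_RUN=1 (default here) just echoes the command; DRY_RUN=0 executes it.
    if [[ "${DRY_RUN:-1}" = 1 ]]; then echo "+ $*"; else "$@"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
# Open the NVMe/TCP listener port on the initiator-facing interface.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions, as the log does.
run ping -c 1 10.0.0.2
run ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
```

Because the target process is later launched with `ip netns exec cvl_0_0_ns_spdk` (NVMF_TARGET_NS_CMD), only the namespaced NIC carries target traffic, giving a real two-endpoint TCP path on one host.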
00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1375187 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1375187 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1375187 ']' 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.749 [2024-12-10 05:55:16.724649] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:29.749 [2024-12-10 05:55:16.725544] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:30:29.749 [2024-12-10 05:55:16.725576] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.749 [2024-12-10 05:55:16.803077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:29.749 [2024-12-10 05:55:16.844473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.749 [2024-12-10 05:55:16.844512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.749 [2024-12-10 05:55:16.844521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.749 [2024-12-10 05:55:16.844529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.749 [2024-12-10 05:55:16.844535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:29.749 [2024-12-10 05:55:16.845928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:29.749 [2024-12-10 05:55:16.846038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:29.749 [2024-12-10 05:55:16.846147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.749 [2024-12-10 05:55:16.846148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:29.749 [2024-12-10 05:55:16.914638] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:29.749 [2024-12-10 05:55:16.915490] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:29.749 [2024-12-10 05:55:16.915739] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:29.749 [2024-12-10 05:55:16.916126] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:29.749 [2024-12-10 05:55:16.916177] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.749 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.749 [2024-12-10 05:55:16.978932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.749 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.749 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:29.749 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:29.749 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.750 05:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.750 Malloc0 00:30:29.750 [2024-12-10 05:55:17.075251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1375308 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1375308 /var/tmp/bdevperf.sock 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:29.750 
05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1375308 ']' 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:29.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:29.750 { 00:30:29.750 "params": { 00:30:29.750 "name": "Nvme$subsystem", 00:30:29.750 "trtype": "$TEST_TRANSPORT", 00:30:29.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.750 "adrfam": "ipv4", 00:30:29.750 "trsvcid": "$NVMF_PORT", 00:30:29.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.750 "hdgst": ${hdgst:-false}, 00:30:29.750 "ddgst": ${ddgst:-false} 00:30:29.750 }, 00:30:29.750 "method": "bdev_nvme_attach_controller" 00:30:29.750 } 00:30:29.750 EOF 00:30:29.750 )") 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:29.750 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:29.750 "params": { 00:30:29.750 "name": "Nvme0", 00:30:29.750 "trtype": "tcp", 00:30:29.750 "traddr": "10.0.0.2", 00:30:29.750 "adrfam": "ipv4", 00:30:29.750 "trsvcid": "4420", 00:30:29.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:29.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:29.750 "hdgst": false, 00:30:29.750 "ddgst": false 00:30:29.750 }, 00:30:29.750 "method": "bdev_nvme_attach_controller" 00:30:29.750 }' 00:30:29.750 [2024-12-10 05:55:17.172709] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:30:29.750 [2024-12-10 05:55:17.172755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375308 ] 00:30:29.750 [2024-12-10 05:55:17.249429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.750 [2024-12-10 05:55:17.289012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.750 Running I/O for 10 seconds... 
00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:30.009 05:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:30:30.009 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:30.270 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:30.270 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:30.270 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:30.270 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:30.270 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:30.270 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:30.270 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.270 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:30:30.270 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:30:30.270 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:30.270 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:30.270 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:30.270 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:30.270 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.270 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:30.270 [2024-12-10 05:55:18.012251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.270 [2024-12-10 05:55:18.012295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.270 [2024-12-10 05:55:18.012311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.270 [2024-12-10 05:55:18.012319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.270 [2024-12-10 05:55:18.012326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.270 [2024-12-10 05:55:18.012333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.270 [2024-12-10 05:55:18.012340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.270 [2024-12-10 05:55:18.012346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.270 [2024-12-10 05:55:18.012353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21457e0 is same with the state(6) to be set 00:30:30.270 [2024-12-10 05:55:18.013106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4730 is same with the state(6) to be set 
[... the identical tcp.c:1790:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x16b4730 repeats for every subsequent event from 05:55:18.013142 through 05:55:18.013500; duplicates elided ...] 
00:30:30.271 [2024-12-10 05:55:18.013569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.271 [2024-12-10 05:55:18.013595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.271 [2024-12-10 05:55:18.013610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.271 [2024-12-10 05:55:18.013617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.271 [2024-12-10 05:55:18.013625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.271 [2024-12-10 05:55:18.013632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:30.271 [2024-12-10 05:55:18.013640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:30.271 [2024-12-10 05:55:18.013646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the READ print_command/print_completion pair above repeats for cid:4 through cid:62, lba:98816 through lba:106240, each completed ABORTED - SQ DELETION (00/08))
00:30:30.273 [2024-12-10 05:55:18.014522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:30.273 [2024-12-10 05:55:18.014528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:30.273 [2024-12-10 05:55:18.014535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235e770 is same with the state(6) to be set
00:30:30.273 [2024-12-10 05:55:18.015487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:30:30.273 05:55:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.273 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:30:30.273 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.273 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:30.273 task offset: 98304 on job bdev=Nvme0n1 fails
00:30:30.273
00:30:30.273 Latency(us)
00:30:30.273 [2024-12-10T04:55:18.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:30.273 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:30.273 Job: Nvme0n1 ended in about 0.41 seconds with error
00:30:30.273 Verification LBA range: start 0x0 length 0x400
00:30:30.273 Nvme0n1 : 0.41 1892.25 118.27 157.69 0.00 30400.99 3464.05 27088.21
00:30:30.273 [2024-12-10T04:55:18.169Z] ===================================================================================================================
00:30:30.273 [2024-12-10T04:55:18.169Z] Total : 1892.25 118.27 157.69 0.00 30400.99 3464.05 27088.21
00:30:30.273 [2024-12-10 05:55:18.017825] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:30.273 [2024-12-10 05:55:18.017846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21457e0 (9): Bad file descriptor
00:30:30.273 [2024-12-10 05:55:18.018805] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:30:30.273 [2024-12-10 05:55:18.018874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:30:30.273 [2024-12-10 05:55:18.018896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:30.273 [2024-12-10 05:55:18.018907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:30:30.273 [2024-12-10 05:55:18.018914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:30:30.273 [2024-12-10 05:55:18.018920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.273 [2024-12-10 05:55:18.018927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21457e0
00:30:30.273 [2024-12-10 05:55:18.018945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21457e0 (9): Bad file descriptor
00:30:30.273 [2024-12-10 05:55:18.018956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:30:30.273 [2024-12-10 05:55:18.018963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:30:30.273 [2024-12-10 05:55:18.018971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:30:30.273 [2024-12-10 05:55:18.018979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:30:30.273 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.273 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1375308
00:30:31.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1375308) - No such process
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:31.210 {
00:30:31.210 "params": {
00:30:31.210 "name": "Nvme$subsystem",
00:30:31.210 "trtype": "$TEST_TRANSPORT",
00:30:31.210 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:31.210 "adrfam": "ipv4",
00:30:31.210 "trsvcid": "$NVMF_PORT",
00:30:31.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:31.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:31.210 "hdgst": ${hdgst:-false},
00:30:31.210 "ddgst": ${ddgst:-false}
00:30:31.210 },
00:30:31.210 "method": "bdev_nvme_attach_controller"
00:30:31.210 }
00:30:31.210 EOF
00:30:31.210 )")
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:30:31.210 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:30:31.210 "params": {
00:30:31.210 "name": "Nvme0",
00:30:31.210 "trtype": "tcp",
00:30:31.210 "traddr": "10.0.0.2",
00:30:31.210 "adrfam": "ipv4",
00:30:31.210 "trsvcid": "4420",
00:30:31.210 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:31.210 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:31.210 "hdgst": false,
00:30:31.210 "ddgst": false
00:30:31.210 },
00:30:31.210 "method": "bdev_nvme_attach_controller"
00:30:31.210 }'
00:30:31.210 [2024-12-10 05:55:19.081886] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization...
00:30:31.210 [2024-12-10 05:55:19.081935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375682 ]
00:30:31.469 [2024-12-10 05:55:19.155478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:31.469 [2024-12-10 05:55:19.193419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:31.727 Running I/O for 1 seconds...
00:30:32.662 1984.00 IOPS, 124.00 MiB/s
00:30:32.662 Latency(us)
00:30:32.662 [2024-12-10T04:55:20.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:32.662 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:32.662 Verification LBA range: start 0x0 length 0x400
00:30:32.662 Nvme0n1 : 1.01 2036.73 127.30 0.00 0.00 30933.10 4213.03 27462.70
00:30:32.662 [2024-12-10T04:55:20.558Z] ===================================================================================================================
00:30:32.662 [2024-12-10T04:55:20.558Z] Total : 2036.73 127.30 0.00 0.00 30933.10 4213.03 27462.70
00:30:32.921 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:30:32.922 
05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:32.922 rmmod nvme_tcp 00:30:32.922 rmmod nvme_fabrics 00:30:32.922 rmmod nvme_keyring 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1375187 ']' 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1375187 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1375187 ']' 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1375187 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1375187 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:32.922 05:55:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1375187' 00:30:32.922 killing process with pid 1375187 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1375187 00:30:32.922 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1375187 00:30:33.181 [2024-12-10 05:55:20.926490] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.181 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:35.719 00:30:35.719 real 0m12.417s 00:30:35.719 user 0m18.549s 00:30:35.719 sys 0m6.271s 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:35.719 ************************************ 00:30:35.719 END TEST nvmf_host_management 00:30:35.719 ************************************ 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:35.719 ************************************ 00:30:35.719 START TEST nvmf_lvol 00:30:35.719 ************************************ 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:35.719 * Looking for test storage... 
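The killprocess helper traced in the teardown above probes the pid with `kill -0`, refuses to kill privileged process names, then kills and `wait`s for it. A reduced, self-contained sketch of that kill-and-reap pattern (the `sudo`/`uname`/`ps` guards from autotest_common.sh are omitted for brevity):

```shell
# Simplified killprocess: terminate a background pid and reap it so no
# zombie is left behind. Mirrors autotest_common.sh@954-@978 minus the guards.
killprocess() {
    local pid=$1
    # kill -0 probes for existence without delivering a signal
    kill -0 "$pid" 2>/dev/null || return 1
    kill "$pid"
    # wait reaps the child; its (signal) exit status is deliberately ignored
    wait "$pid" 2>/dev/null
    return 0
}

sleep 60 &
bgpid=$!
killprocess "$bgpid" && echo "killed $bgpid"
```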
00:30:35.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:35.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.719 --rc genhtml_branch_coverage=1 00:30:35.719 --rc genhtml_function_coverage=1 00:30:35.719 --rc genhtml_legend=1 00:30:35.719 --rc geninfo_all_blocks=1 00:30:35.719 --rc geninfo_unexecuted_blocks=1 00:30:35.719 00:30:35.719 ' 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:35.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.719 --rc genhtml_branch_coverage=1 00:30:35.719 --rc genhtml_function_coverage=1 00:30:35.719 --rc genhtml_legend=1 00:30:35.719 --rc geninfo_all_blocks=1 00:30:35.719 --rc geninfo_unexecuted_blocks=1 00:30:35.719 00:30:35.719 ' 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:35.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.719 --rc genhtml_branch_coverage=1 00:30:35.719 --rc genhtml_function_coverage=1 00:30:35.719 --rc genhtml_legend=1 00:30:35.719 --rc geninfo_all_blocks=1 00:30:35.719 --rc geninfo_unexecuted_blocks=1 00:30:35.719 00:30:35.719 ' 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:35.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.719 --rc genhtml_branch_coverage=1 00:30:35.719 --rc genhtml_function_coverage=1 00:30:35.719 --rc genhtml_legend=1 00:30:35.719 --rc geninfo_all_blocks=1 00:30:35.719 --rc geninfo_unexecuted_blocks=1 00:30:35.719 00:30:35.719 ' 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.719 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:35.720 
05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:35.720 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:42.290 05:55:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.290 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:42.291 05:55:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:42.291 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:42.291 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.291 05:55:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:42.291 Found net devices under 0000:af:00.0: cvl_0_0 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.291 05:55:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:42.291 Found net devices under 0000:af:00.1: cvl_0_1 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.291 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:42.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:30:42.291 00:30:42.291 --- 10.0.0.2 ping statistics --- 00:30:42.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.291 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:42.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:42.291 00:30:42.291 --- 10.0.0.1 ping statistics --- 00:30:42.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.291 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1379379 
00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1379379 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1379379 ']' 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.291 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:42.292 [2024-12-10 05:55:29.237567] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:42.292 [2024-12-10 05:55:29.238487] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:30:42.292 [2024-12-10 05:55:29.238519] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.292 [2024-12-10 05:55:29.317641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:42.292 [2024-12-10 05:55:29.357817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.292 [2024-12-10 05:55:29.357854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.292 [2024-12-10 05:55:29.357862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.292 [2024-12-10 05:55:29.357868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.292 [2024-12-10 05:55:29.357872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.292 [2024-12-10 05:55:29.359110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.292 [2024-12-10 05:55:29.359145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.292 [2024-12-10 05:55:29.359146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.292 [2024-12-10 05:55:29.425717] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:42.292 [2024-12-10 05:55:29.426491] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:42.292 [2024-12-10 05:55:29.426956] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:42.292 [2024-12-10 05:55:29.427062] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:42.292 [2024-12-10 05:55:29.659959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:42.292 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:42.292 05:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:42.292 05:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:42.551 05:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:42.810 05:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d3b66d4d-c872-40f4-874c-78758d2cfa02 00:30:42.810 05:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d3b66d4d-c872-40f4-874c-78758d2cfa02 lvol 20 00:30:43.069 05:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3ae44752-818a-4c8a-80bd-db16b4f31197 00:30:43.069 05:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:43.069 05:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3ae44752-818a-4c8a-80bd-db16b4f31197 00:30:43.328 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:43.587 [2024-12-10 05:55:31.319862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.587 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:43.846 
05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1379823 00:30:43.846 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:43.846 05:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:44.785 05:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3ae44752-818a-4c8a-80bd-db16b4f31197 MY_SNAPSHOT 00:30:45.064 05:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2804a5e3-4b65-4a8a-afb1-0e175a063a6f 00:30:45.064 05:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3ae44752-818a-4c8a-80bd-db16b4f31197 30 00:30:45.349 05:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2804a5e3-4b65-4a8a-afb1-0e175a063a6f MY_CLONE 00:30:45.628 05:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=42aeebf9-8287-495c-ae6c-ab367e6b28f8 00:30:45.628 05:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 42aeebf9-8287-495c-ae6c-ab367e6b28f8 00:30:45.887 05:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1379823 00:30:55.860 Initializing NVMe Controllers 00:30:55.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:55.860 
Controller IO queue size 128, less than required. 00:30:55.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:55.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:55.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:55.860 Initialization complete. Launching workers. 00:30:55.860 ======================================================== 00:30:55.860 Latency(us) 00:30:55.860 Device Information : IOPS MiB/s Average min max 00:30:55.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12651.02 49.42 10119.03 1537.19 49203.17 00:30:55.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12511.82 48.87 10235.24 2574.16 39109.52 00:30:55.860 ======================================================== 00:30:55.860 Total : 25162.83 98.29 10176.81 1537.19 49203.17 00:30:55.860 00:30:55.860 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3ae44752-818a-4c8a-80bd-db16b4f31197 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d3b66d4d-c872-40f4-874c-78758d2cfa02 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:55.860 rmmod nvme_tcp 00:30:55.860 rmmod nvme_fabrics 00:30:55.860 rmmod nvme_keyring 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1379379 ']' 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1379379 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1379379 ']' 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1379379 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1379379 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379379' 00:30:55.860 killing process with pid 1379379 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1379379 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1379379 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.860 05:55:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.860 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.239 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.239 00:30:57.239 real 0m21.813s 00:30:57.239 user 0m55.718s 00:30:57.239 sys 0m9.659s 00:30:57.239 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.239 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:57.239 ************************************ 00:30:57.239 END TEST nvmf_lvol 00:30:57.239 ************************************ 00:30:57.239 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:57.239 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:57.239 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.239 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:57.239 ************************************ 00:30:57.239 START TEST nvmf_lvs_grow 00:30:57.239 ************************************ 00:30:57.239 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:57.239 * Looking for test storage... 
00:30:57.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:57.239 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:57.239 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:30:57.239 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.498 05:55:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:57.498 05:55:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:57.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.498 --rc genhtml_branch_coverage=1 00:30:57.498 --rc genhtml_function_coverage=1 00:30:57.498 --rc genhtml_legend=1 00:30:57.498 --rc geninfo_all_blocks=1 00:30:57.498 --rc geninfo_unexecuted_blocks=1 00:30:57.498 00:30:57.498 ' 00:30:57.498 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:57.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.499 --rc genhtml_branch_coverage=1 00:30:57.499 --rc genhtml_function_coverage=1 00:30:57.499 --rc genhtml_legend=1 00:30:57.499 --rc geninfo_all_blocks=1 00:30:57.499 --rc geninfo_unexecuted_blocks=1 00:30:57.499 00:30:57.499 ' 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:57.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.499 --rc genhtml_branch_coverage=1 00:30:57.499 --rc genhtml_function_coverage=1 00:30:57.499 --rc genhtml_legend=1 00:30:57.499 --rc geninfo_all_blocks=1 00:30:57.499 --rc geninfo_unexecuted_blocks=1 00:30:57.499 00:30:57.499 ' 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:57.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.499 --rc genhtml_branch_coverage=1 00:30:57.499 --rc genhtml_function_coverage=1 00:30:57.499 --rc genhtml_legend=1 00:30:57.499 --rc geninfo_all_blocks=1 00:30:57.499 --rc 
geninfo_unexecuted_blocks=1 00:30:57.499 00:30:57.499 ' 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:57.499 05:55:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.499 05:55:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:57.499 05:55:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:57.499 05:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:04.074 
05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.074 05:55:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:04.074 05:55:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:04.074 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:04.074 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:04.074 Found net devices under 0000:af:00.0: cvl_0_0 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.074 05:55:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.074 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:04.075 Found net devices under 0000:af:00.1: cvl_0_1 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.075 
05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.075 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:31:04.075 00:31:04.075 --- 10.0.0.2 ping statistics --- 00:31:04.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.075 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:31:04.075 00:31:04.075 --- 10.0.0.1 ping statistics --- 00:31:04.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.075 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:04.075 05:55:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1384951 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1384951 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1384951 ']' 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:04.075 [2024-12-10 05:55:51.150808] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:04.075 [2024-12-10 05:55:51.151696] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:31:04.075 [2024-12-10 05:55:51.151727] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.075 [2024-12-10 05:55:51.229406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.075 [2024-12-10 05:55:51.268646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.075 [2024-12-10 05:55:51.268683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.075 [2024-12-10 05:55:51.268691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.075 [2024-12-10 05:55:51.268696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.075 [2024-12-10 05:55:51.268702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.075 [2024-12-10 05:55:51.269185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.075 [2024-12-10 05:55:51.336151] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:04.075 [2024-12-10 05:55:51.336355] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:04.075 [2024-12-10 05:55:51.561843] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:04.075 ************************************ 00:31:04.075 START TEST lvs_grow_clean 00:31:04.075 ************************************ 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:31:04.075 05:55:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:04.075 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:04.076 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:04.076 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:04.076 05:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:04.334 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6a0373cb-fbf8-477b-a233-2e9472aa8ca6 00:31:04.334 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 00:31:04.334 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:04.592 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:04.592 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:04.592 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 lvol 150 00:31:04.592 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=50fa287c-a9f5-43fe-a24e-9c875e1a3c8c 00:31:04.592 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:04.592 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:04.851 [2024-12-10 05:55:52.633567] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:04.851 [2024-12-10 05:55:52.633697] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:04.851 true 00:31:04.851 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 00:31:04.851 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:05.109 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:05.109 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:05.368 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 50fa287c-a9f5-43fe-a24e-9c875e1a3c8c 00:31:05.368 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.627 [2024-12-10 05:55:53.418077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.627 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:05.885 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1385378 00:31:05.885 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:05.885 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:05.886 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1385378 /var/tmp/bdevperf.sock 00:31:05.886 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1385378 ']' 00:31:05.886 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:05.886 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.886 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:05.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:05.886 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.886 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:05.886 [2024-12-10 05:55:53.676139] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:31:05.886 [2024-12-10 05:55:53.676195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385378 ] 00:31:05.886 [2024-12-10 05:55:53.751147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.144 [2024-12-10 05:55:53.792393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.144 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:06.144 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:06.144 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:06.403 Nvme0n1 00:31:06.403 05:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:06.662 [ 00:31:06.662 { 00:31:06.662 "name": "Nvme0n1", 00:31:06.662 "aliases": [ 00:31:06.662 "50fa287c-a9f5-43fe-a24e-9c875e1a3c8c" 00:31:06.662 ], 00:31:06.662 "product_name": "NVMe disk", 00:31:06.662 
"block_size": 4096, 00:31:06.662 "num_blocks": 38912, 00:31:06.662 "uuid": "50fa287c-a9f5-43fe-a24e-9c875e1a3c8c", 00:31:06.662 "numa_id": 1, 00:31:06.662 "assigned_rate_limits": { 00:31:06.662 "rw_ios_per_sec": 0, 00:31:06.662 "rw_mbytes_per_sec": 0, 00:31:06.662 "r_mbytes_per_sec": 0, 00:31:06.662 "w_mbytes_per_sec": 0 00:31:06.662 }, 00:31:06.662 "claimed": false, 00:31:06.662 "zoned": false, 00:31:06.662 "supported_io_types": { 00:31:06.662 "read": true, 00:31:06.662 "write": true, 00:31:06.662 "unmap": true, 00:31:06.662 "flush": true, 00:31:06.662 "reset": true, 00:31:06.662 "nvme_admin": true, 00:31:06.662 "nvme_io": true, 00:31:06.662 "nvme_io_md": false, 00:31:06.662 "write_zeroes": true, 00:31:06.662 "zcopy": false, 00:31:06.662 "get_zone_info": false, 00:31:06.662 "zone_management": false, 00:31:06.662 "zone_append": false, 00:31:06.662 "compare": true, 00:31:06.662 "compare_and_write": true, 00:31:06.662 "abort": true, 00:31:06.662 "seek_hole": false, 00:31:06.662 "seek_data": false, 00:31:06.662 "copy": true, 00:31:06.662 "nvme_iov_md": false 00:31:06.662 }, 00:31:06.662 "memory_domains": [ 00:31:06.662 { 00:31:06.662 "dma_device_id": "system", 00:31:06.662 "dma_device_type": 1 00:31:06.662 } 00:31:06.662 ], 00:31:06.662 "driver_specific": { 00:31:06.662 "nvme": [ 00:31:06.662 { 00:31:06.662 "trid": { 00:31:06.662 "trtype": "TCP", 00:31:06.662 "adrfam": "IPv4", 00:31:06.662 "traddr": "10.0.0.2", 00:31:06.662 "trsvcid": "4420", 00:31:06.662 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:06.662 }, 00:31:06.662 "ctrlr_data": { 00:31:06.662 "cntlid": 1, 00:31:06.662 "vendor_id": "0x8086", 00:31:06.662 "model_number": "SPDK bdev Controller", 00:31:06.662 "serial_number": "SPDK0", 00:31:06.662 "firmware_revision": "25.01", 00:31:06.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.662 "oacs": { 00:31:06.662 "security": 0, 00:31:06.662 "format": 0, 00:31:06.662 "firmware": 0, 00:31:06.662 "ns_manage": 0 00:31:06.662 }, 00:31:06.662 "multi_ctrlr": true, 
00:31:06.662 "ana_reporting": false 00:31:06.662 }, 00:31:06.662 "vs": { 00:31:06.662 "nvme_version": "1.3" 00:31:06.662 }, 00:31:06.662 "ns_data": { 00:31:06.662 "id": 1, 00:31:06.662 "can_share": true 00:31:06.662 } 00:31:06.662 } 00:31:06.662 ], 00:31:06.662 "mp_policy": "active_passive" 00:31:06.662 } 00:31:06.662 } 00:31:06.662 ] 00:31:06.662 05:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1385591 00:31:06.662 05:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:06.662 05:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:06.662 Running I/O for 10 seconds... 00:31:07.597 Latency(us) 00:31:07.597 [2024-12-10T04:55:55.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:07.597 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:31:07.597 [2024-12-10T04:55:55.493Z] =================================================================================================================== 00:31:07.597 [2024-12-10T04:55:55.493Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:31:07.597 00:31:08.533 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 00:31:08.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.533 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:08.533 [2024-12-10T04:55:56.429Z] 
=================================================================================================================== 00:31:08.533 [2024-12-10T04:55:56.429Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:08.533 00:31:08.792 true 00:31:08.792 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 00:31:08.792 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:09.050 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:09.050 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:09.050 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1385591 00:31:09.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.617 Nvme0n1 : 3.00 23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:31:09.617 [2024-12-10T04:55:57.513Z] =================================================================================================================== 00:31:09.617 [2024-12-10T04:55:57.513Z] Total : 23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:31:09.617 00:31:10.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:10.553 Nvme0n1 : 4.00 23272.75 90.91 0.00 0.00 0.00 0.00 0.00 00:31:10.553 [2024-12-10T04:55:58.449Z] =================================================================================================================== 00:31:10.553 [2024-12-10T04:55:58.449Z] Total : 23272.75 90.91 0.00 0.00 0.00 0.00 0.00 00:31:10.553 00:31:11.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:31:11.930 Nvme0n1 : 5.00 23349.40 91.21 0.00 0.00 0.00 0.00 0.00 00:31:11.930 [2024-12-10T04:55:59.826Z] =================================================================================================================== 00:31:11.930 [2024-12-10T04:55:59.826Z] Total : 23349.40 91.21 0.00 0.00 0.00 0.00 0.00 00:31:11.930 00:31:12.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:12.865 Nvme0n1 : 6.00 23363.17 91.26 0.00 0.00 0.00 0.00 0.00 00:31:12.865 [2024-12-10T04:56:00.761Z] =================================================================================================================== 00:31:12.865 [2024-12-10T04:56:00.761Z] Total : 23363.17 91.26 0.00 0.00 0.00 0.00 0.00 00:31:12.865 00:31:13.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:13.801 Nvme0n1 : 7.00 23398.00 91.40 0.00 0.00 0.00 0.00 0.00 00:31:13.801 [2024-12-10T04:56:01.697Z] =================================================================================================================== 00:31:13.801 [2024-12-10T04:56:01.697Z] Total : 23398.00 91.40 0.00 0.00 0.00 0.00 0.00 00:31:13.801 00:31:14.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:14.736 Nvme0n1 : 8.00 23426.00 91.51 0.00 0.00 0.00 0.00 0.00 00:31:14.736 [2024-12-10T04:56:02.632Z] =================================================================================================================== 00:31:14.736 [2024-12-10T04:56:02.632Z] Total : 23426.00 91.51 0.00 0.00 0.00 0.00 0.00 00:31:14.736 00:31:15.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:15.672 Nvme0n1 : 9.00 23461.89 91.65 0.00 0.00 0.00 0.00 0.00 00:31:15.672 [2024-12-10T04:56:03.568Z] =================================================================================================================== 00:31:15.672 [2024-12-10T04:56:03.568Z] Total : 23461.89 91.65 0.00 0.00 0.00 0.00 0.00 00:31:15.672 
00:31:16.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:16.608 Nvme0n1 : 10.00 23477.90 91.71 0.00 0.00 0.00 0.00 0.00 00:31:16.608 [2024-12-10T04:56:04.504Z] =================================================================================================================== 00:31:16.608 [2024-12-10T04:56:04.504Z] Total : 23477.90 91.71 0.00 0.00 0.00 0.00 0.00 00:31:16.608 00:31:16.608 00:31:16.608 Latency(us) 00:31:16.608 [2024-12-10T04:56:04.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:16.608 Nvme0n1 : 10.00 23483.81 91.73 0.00 0.00 5447.85 3167.57 27837.20 00:31:16.608 [2024-12-10T04:56:04.504Z] =================================================================================================================== 00:31:16.608 [2024-12-10T04:56:04.504Z] Total : 23483.81 91.73 0.00 0.00 5447.85 3167.57 27837.20 00:31:16.608 { 00:31:16.608 "results": [ 00:31:16.608 { 00:31:16.608 "job": "Nvme0n1", 00:31:16.608 "core_mask": "0x2", 00:31:16.608 "workload": "randwrite", 00:31:16.608 "status": "finished", 00:31:16.608 "queue_depth": 128, 00:31:16.608 "io_size": 4096, 00:31:16.608 "runtime": 10.002936, 00:31:16.608 "iops": 23483.80515480655, 00:31:16.608 "mibps": 91.73361388596308, 00:31:16.608 "io_failed": 0, 00:31:16.608 "io_timeout": 0, 00:31:16.608 "avg_latency_us": 5447.851518884779, 00:31:16.608 "min_latency_us": 3167.5733333333333, 00:31:16.608 "max_latency_us": 27837.196190476192 00:31:16.608 } 00:31:16.608 ], 00:31:16.608 "core_count": 1 00:31:16.608 } 00:31:16.608 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1385378 00:31:16.608 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1385378 ']' 00:31:16.608 05:56:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1385378 00:31:16.608 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:31:16.608 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:16.608 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1385378 00:31:16.867 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:16.867 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:16.867 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1385378' 00:31:16.867 killing process with pid 1385378 00:31:16.867 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1385378 00:31:16.867 Received shutdown signal, test time was about 10.000000 seconds 00:31:16.867 00:31:16.867 Latency(us) 00:31:16.867 [2024-12-10T04:56:04.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.867 [2024-12-10T04:56:04.763Z] =================================================================================================================== 00:31:16.867 [2024-12-10T04:56:04.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:16.867 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1385378 00:31:16.867 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:17.126 05:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:17.385 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 00:31:17.385 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:17.385 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:17.385 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:17.385 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:17.644 [2024-12-10 05:56:05.429628] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:17.644 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 00:31:17.903 request: 00:31:17.903 { 00:31:17.903 "uuid": "6a0373cb-fbf8-477b-a233-2e9472aa8ca6", 00:31:17.903 "method": 
"bdev_lvol_get_lvstores", 00:31:17.903 "req_id": 1 00:31:17.903 } 00:31:17.903 Got JSON-RPC error response 00:31:17.903 response: 00:31:17.903 { 00:31:17.903 "code": -19, 00:31:17.903 "message": "No such device" 00:31:17.903 } 00:31:17.903 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:31:17.903 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:17.903 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:17.903 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:17.903 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:18.162 aio_bdev 00:31:18.162 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 50fa287c-a9f5-43fe-a24e-9c875e1a3c8c 00:31:18.162 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=50fa287c-a9f5-43fe-a24e-9c875e1a3c8c 00:31:18.162 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:18.162 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:31:18.162 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:18.162 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:18.162 05:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:18.162 05:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 50fa287c-a9f5-43fe-a24e-9c875e1a3c8c -t 2000 00:31:18.421 [ 00:31:18.421 { 00:31:18.421 "name": "50fa287c-a9f5-43fe-a24e-9c875e1a3c8c", 00:31:18.421 "aliases": [ 00:31:18.421 "lvs/lvol" 00:31:18.421 ], 00:31:18.421 "product_name": "Logical Volume", 00:31:18.421 "block_size": 4096, 00:31:18.421 "num_blocks": 38912, 00:31:18.421 "uuid": "50fa287c-a9f5-43fe-a24e-9c875e1a3c8c", 00:31:18.421 "assigned_rate_limits": { 00:31:18.421 "rw_ios_per_sec": 0, 00:31:18.421 "rw_mbytes_per_sec": 0, 00:31:18.421 "r_mbytes_per_sec": 0, 00:31:18.421 "w_mbytes_per_sec": 0 00:31:18.421 }, 00:31:18.421 "claimed": false, 00:31:18.421 "zoned": false, 00:31:18.421 "supported_io_types": { 00:31:18.421 "read": true, 00:31:18.421 "write": true, 00:31:18.421 "unmap": true, 00:31:18.421 "flush": false, 00:31:18.421 "reset": true, 00:31:18.421 "nvme_admin": false, 00:31:18.421 "nvme_io": false, 00:31:18.421 "nvme_io_md": false, 00:31:18.421 "write_zeroes": true, 00:31:18.421 "zcopy": false, 00:31:18.421 "get_zone_info": false, 00:31:18.421 "zone_management": false, 00:31:18.421 "zone_append": false, 00:31:18.421 "compare": false, 00:31:18.421 "compare_and_write": false, 00:31:18.421 "abort": false, 00:31:18.421 "seek_hole": true, 00:31:18.421 "seek_data": true, 00:31:18.421 "copy": false, 00:31:18.421 "nvme_iov_md": false 00:31:18.421 }, 00:31:18.421 "driver_specific": { 00:31:18.421 "lvol": { 00:31:18.421 "lvol_store_uuid": "6a0373cb-fbf8-477b-a233-2e9472aa8ca6", 00:31:18.421 "base_bdev": "aio_bdev", 00:31:18.421 
"thin_provision": false, 00:31:18.421 "num_allocated_clusters": 38, 00:31:18.421 "snapshot": false, 00:31:18.421 "clone": false, 00:31:18.421 "esnap_clone": false 00:31:18.421 } 00:31:18.421 } 00:31:18.421 } 00:31:18.421 ] 00:31:18.421 05:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:18.421 05:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 00:31:18.421 05:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:18.680 05:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:18.680 05:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 00:31:18.680 05:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:18.939 05:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:18.939 05:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 50fa287c-a9f5-43fe-a24e-9c875e1a3c8c 00:31:18.939 05:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a0373cb-fbf8-477b-a233-2e9472aa8ca6 
00:31:19.198 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:19.457 00:31:19.457 real 0m15.631s 00:31:19.457 user 0m15.102s 00:31:19.457 sys 0m1.530s 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:19.457 ************************************ 00:31:19.457 END TEST lvs_grow_clean 00:31:19.457 ************************************ 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:19.457 ************************************ 00:31:19.457 START TEST lvs_grow_dirty 00:31:19.457 ************************************ 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:19.457 05:56:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:19.457 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:19.716 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:19.716 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:19.974 05:56:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:19.974 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:19.974 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:20.233 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:20.233 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:20.233 05:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa lvol 150 00:31:20.491 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=474616d3-2d67-4f3c-92de-da3ce2d2e172 00:31:20.491 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:20.491 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:20.491 [2024-12-10 05:56:08.305568] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:20.491 [2024-12-10 
05:56:08.305693] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:20.491 true 00:31:20.491 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:20.491 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:20.750 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:20.750 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:21.008 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 474616d3-2d67-4f3c-92de-da3ce2d2e172 00:31:21.266 05:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:21.266 [2024-12-10 05:56:09.073984] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.266 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:21.525 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1387890 00:31:21.525 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:21.525 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:21.525 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1387890 /var/tmp/bdevperf.sock 00:31:21.525 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1387890 ']' 00:31:21.525 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:21.525 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:21.525 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:21.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:21.525 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:21.525 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:21.525 [2024-12-10 05:56:09.328027] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:31:21.525 [2024-12-10 05:56:09.328077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387890 ] 00:31:21.525 [2024-12-10 05:56:09.402910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.784 [2024-12-10 05:56:09.443948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.784 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:21.784 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:21.784 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:22.043 Nvme0n1 00:31:22.043 05:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:22.302 [ 00:31:22.302 { 00:31:22.302 "name": "Nvme0n1", 00:31:22.302 "aliases": [ 00:31:22.302 "474616d3-2d67-4f3c-92de-da3ce2d2e172" 00:31:22.302 ], 00:31:22.302 "product_name": "NVMe disk", 00:31:22.302 "block_size": 4096, 00:31:22.302 "num_blocks": 38912, 00:31:22.302 "uuid": "474616d3-2d67-4f3c-92de-da3ce2d2e172", 00:31:22.302 "numa_id": 1, 00:31:22.302 "assigned_rate_limits": { 00:31:22.302 "rw_ios_per_sec": 0, 00:31:22.302 "rw_mbytes_per_sec": 0, 00:31:22.302 "r_mbytes_per_sec": 0, 00:31:22.302 "w_mbytes_per_sec": 0 00:31:22.302 }, 00:31:22.302 "claimed": false, 00:31:22.302 "zoned": false, 
00:31:22.302 "supported_io_types": { 00:31:22.302 "read": true, 00:31:22.302 "write": true, 00:31:22.302 "unmap": true, 00:31:22.302 "flush": true, 00:31:22.302 "reset": true, 00:31:22.302 "nvme_admin": true, 00:31:22.302 "nvme_io": true, 00:31:22.302 "nvme_io_md": false, 00:31:22.302 "write_zeroes": true, 00:31:22.302 "zcopy": false, 00:31:22.302 "get_zone_info": false, 00:31:22.302 "zone_management": false, 00:31:22.302 "zone_append": false, 00:31:22.302 "compare": true, 00:31:22.302 "compare_and_write": true, 00:31:22.302 "abort": true, 00:31:22.302 "seek_hole": false, 00:31:22.302 "seek_data": false, 00:31:22.302 "copy": true, 00:31:22.302 "nvme_iov_md": false 00:31:22.302 }, 00:31:22.302 "memory_domains": [ 00:31:22.302 { 00:31:22.302 "dma_device_id": "system", 00:31:22.302 "dma_device_type": 1 00:31:22.302 } 00:31:22.302 ], 00:31:22.302 "driver_specific": { 00:31:22.302 "nvme": [ 00:31:22.302 { 00:31:22.302 "trid": { 00:31:22.302 "trtype": "TCP", 00:31:22.302 "adrfam": "IPv4", 00:31:22.302 "traddr": "10.0.0.2", 00:31:22.302 "trsvcid": "4420", 00:31:22.302 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:22.302 }, 00:31:22.302 "ctrlr_data": { 00:31:22.302 "cntlid": 1, 00:31:22.302 "vendor_id": "0x8086", 00:31:22.302 "model_number": "SPDK bdev Controller", 00:31:22.302 "serial_number": "SPDK0", 00:31:22.302 "firmware_revision": "25.01", 00:31:22.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:22.302 "oacs": { 00:31:22.302 "security": 0, 00:31:22.302 "format": 0, 00:31:22.302 "firmware": 0, 00:31:22.302 "ns_manage": 0 00:31:22.302 }, 00:31:22.302 "multi_ctrlr": true, 00:31:22.302 "ana_reporting": false 00:31:22.302 }, 00:31:22.302 "vs": { 00:31:22.302 "nvme_version": "1.3" 00:31:22.302 }, 00:31:22.302 "ns_data": { 00:31:22.302 "id": 1, 00:31:22.302 "can_share": true 00:31:22.302 } 00:31:22.302 } 00:31:22.302 ], 00:31:22.302 "mp_policy": "active_passive" 00:31:22.302 } 00:31:22.302 } 00:31:22.302 ] 00:31:22.302 05:56:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1388110 00:31:22.302 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:22.302 05:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:22.302 Running I/O for 10 seconds... 00:31:23.679 Latency(us) 00:31:23.679 [2024-12-10T04:56:11.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:23.680 Nvme0n1 : 1.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:31:23.680 [2024-12-10T04:56:11.576Z] =================================================================================================================== 00:31:23.680 [2024-12-10T04:56:11.576Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:31:23.680 00:31:24.247 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:24.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:24.506 Nvme0n1 : 2.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:31:24.506 [2024-12-10T04:56:12.402Z] =================================================================================================================== 00:31:24.506 [2024-12-10T04:56:12.402Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:31:24.506 00:31:24.506 true 00:31:24.506 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:24.506 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:24.765 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:24.765 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:24.765 05:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1388110 00:31:25.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:25.333 Nvme0n1 : 3.00 23156.33 90.45 0.00 0.00 0.00 0.00 0.00 00:31:25.333 [2024-12-10T04:56:13.229Z] =================================================================================================================== 00:31:25.333 [2024-12-10T04:56:13.229Z] Total : 23156.33 90.45 0.00 0.00 0.00 0.00 0.00 00:31:25.333 00:31:26.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:26.709 Nvme0n1 : 4.00 23272.75 90.91 0.00 0.00 0.00 0.00 0.00 00:31:26.709 [2024-12-10T04:56:14.606Z] =================================================================================================================== 00:31:26.710 [2024-12-10T04:56:14.606Z] Total : 23272.75 90.91 0.00 0.00 0.00 0.00 0.00 00:31:26.710 00:31:27.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:27.645 Nvme0n1 : 5.00 23355.40 91.23 0.00 0.00 0.00 0.00 0.00 00:31:27.645 [2024-12-10T04:56:15.541Z] =================================================================================================================== 00:31:27.645 [2024-12-10T04:56:15.541Z] Total : 23355.40 91.23 0.00 0.00 0.00 0.00 0.00 00:31:27.645 00:31:28.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:28.581 Nvme0n1 : 6.00 23397.33 91.40 0.00 0.00 0.00 0.00 0.00 00:31:28.581 [2024-12-10T04:56:16.477Z] =================================================================================================================== 00:31:28.581 [2024-12-10T04:56:16.477Z] Total : 23397.33 91.40 0.00 0.00 0.00 0.00 0.00 00:31:28.581 00:31:29.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:29.517 Nvme0n1 : 7.00 23447.57 91.59 0.00 0.00 0.00 0.00 0.00 00:31:29.517 [2024-12-10T04:56:17.413Z] =================================================================================================================== 00:31:29.517 [2024-12-10T04:56:17.413Z] Total : 23447.57 91.59 0.00 0.00 0.00 0.00 0.00 00:31:29.517 00:31:30.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:30.454 Nvme0n1 : 8.00 23485.25 91.74 0.00 0.00 0.00 0.00 0.00 00:31:30.454 [2024-12-10T04:56:18.350Z] =================================================================================================================== 00:31:30.454 [2024-12-10T04:56:18.350Z] Total : 23485.25 91.74 0.00 0.00 0.00 0.00 0.00 00:31:30.454 00:31:31.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:31.390 Nvme0n1 : 9.00 23514.56 91.85 0.00 0.00 0.00 0.00 0.00 00:31:31.390 [2024-12-10T04:56:19.286Z] =================================================================================================================== 00:31:31.390 [2024-12-10T04:56:19.286Z] Total : 23514.56 91.85 0.00 0.00 0.00 0.00 0.00 00:31:31.390 00:31:32.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:32.325 Nvme0n1 : 10.00 23519.00 91.87 0.00 0.00 0.00 0.00 0.00 00:31:32.325 [2024-12-10T04:56:20.221Z] =================================================================================================================== 00:31:32.325 [2024-12-10T04:56:20.222Z] Total : 23519.00 91.87 0.00 0.00 0.00 0.00 0.00 00:31:32.326 00:31:32.326 
00:31:32.326 Latency(us) 00:31:32.326 [2024-12-10T04:56:20.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:32.326 Nvme0n1 : 10.00 23523.18 91.89 0.00 0.00 5438.12 2621.44 27712.37 00:31:32.326 [2024-12-10T04:56:20.222Z] =================================================================================================================== 00:31:32.326 [2024-12-10T04:56:20.222Z] Total : 23523.18 91.89 0.00 0.00 5438.12 2621.44 27712.37 00:31:32.326 { 00:31:32.326 "results": [ 00:31:32.326 { 00:31:32.326 "job": "Nvme0n1", 00:31:32.326 "core_mask": "0x2", 00:31:32.326 "workload": "randwrite", 00:31:32.326 "status": "finished", 00:31:32.326 "queue_depth": 128, 00:31:32.326 "io_size": 4096, 00:31:32.326 "runtime": 10.003664, 00:31:32.326 "iops": 23523.1811064426, 00:31:32.326 "mibps": 91.8874261970414, 00:31:32.326 "io_failed": 0, 00:31:32.326 "io_timeout": 0, 00:31:32.326 "avg_latency_us": 5438.11905445883, 00:31:32.326 "min_latency_us": 2621.44, 00:31:32.326 "max_latency_us": 27712.365714285716 00:31:32.326 } 00:31:32.326 ], 00:31:32.326 "core_count": 1 00:31:32.326 } 00:31:32.585 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1387890 00:31:32.585 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1387890 ']' 00:31:32.585 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1387890 00:31:32.585 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:32.585 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:32.585 05:56:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1387890 00:31:32.585 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:32.585 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:32.585 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1387890' 00:31:32.585 killing process with pid 1387890 00:31:32.585 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1387890 00:31:32.585 Received shutdown signal, test time was about 10.000000 seconds 00:31:32.585 00:31:32.585 Latency(us) 00:31:32.585 [2024-12-10T04:56:20.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.585 [2024-12-10T04:56:20.481Z] =================================================================================================================== 00:31:32.585 [2024-12-10T04:56:20.481Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:32.585 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1387890 00:31:32.585 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:32.844 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:33.102 05:56:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:33.102 05:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1384951 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1384951 00:31:33.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1384951 Killed "${NVMF_APP[@]}" "$@" 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1389893 00:31:33.361 05:56:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1389893 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1389893 ']' 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:33.361 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:33.361 [2024-12-10 05:56:21.175288] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:33.361 [2024-12-10 05:56:21.176181] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:31:33.361 [2024-12-10 05:56:21.176217] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.620 [2024-12-10 05:56:21.253246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.620 [2024-12-10 05:56:21.291822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.620 [2024-12-10 05:56:21.291858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.620 [2024-12-10 05:56:21.291865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.620 [2024-12-10 05:56:21.291871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.620 [2024-12-10 05:56:21.291876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:33.620 [2024-12-10 05:56:21.292323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.620 [2024-12-10 05:56:21.358369] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:33.620 [2024-12-10 05:56:21.358561] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:33.620 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:33.620 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:33.620 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:33.620 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:33.620 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:33.620 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:33.620 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:33.879 [2024-12-10 05:56:21.593657] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:33.879 [2024-12-10 05:56:21.593856] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:33.879 [2024-12-10 05:56:21.593941] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:33.879 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:33.879 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 474616d3-2d67-4f3c-92de-da3ce2d2e172 00:31:33.879 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=474616d3-2d67-4f3c-92de-da3ce2d2e172 00:31:33.879 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:33.879 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:33.879 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:33.879 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:33.879 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:34.138 05:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 474616d3-2d67-4f3c-92de-da3ce2d2e172 -t 2000 00:31:34.138 [ 00:31:34.138 { 00:31:34.138 "name": "474616d3-2d67-4f3c-92de-da3ce2d2e172", 00:31:34.138 "aliases": [ 00:31:34.138 "lvs/lvol" 00:31:34.138 ], 00:31:34.138 "product_name": "Logical Volume", 00:31:34.138 "block_size": 4096, 00:31:34.138 "num_blocks": 38912, 00:31:34.138 "uuid": "474616d3-2d67-4f3c-92de-da3ce2d2e172", 00:31:34.138 "assigned_rate_limits": { 00:31:34.138 "rw_ios_per_sec": 0, 00:31:34.138 "rw_mbytes_per_sec": 0, 00:31:34.138 "r_mbytes_per_sec": 0, 00:31:34.138 "w_mbytes_per_sec": 0 00:31:34.138 }, 00:31:34.138 "claimed": false, 00:31:34.138 "zoned": false, 00:31:34.138 "supported_io_types": { 00:31:34.138 "read": true, 00:31:34.138 "write": true, 00:31:34.138 "unmap": true, 00:31:34.138 "flush": false, 00:31:34.138 "reset": true, 00:31:34.138 "nvme_admin": false, 00:31:34.138 "nvme_io": false, 00:31:34.138 "nvme_io_md": false, 00:31:34.138 "write_zeroes": true, 
00:31:34.138 "zcopy": false, 00:31:34.138 "get_zone_info": false, 00:31:34.138 "zone_management": false, 00:31:34.138 "zone_append": false, 00:31:34.138 "compare": false, 00:31:34.138 "compare_and_write": false, 00:31:34.138 "abort": false, 00:31:34.138 "seek_hole": true, 00:31:34.138 "seek_data": true, 00:31:34.138 "copy": false, 00:31:34.138 "nvme_iov_md": false 00:31:34.138 }, 00:31:34.138 "driver_specific": { 00:31:34.138 "lvol": { 00:31:34.138 "lvol_store_uuid": "2c9bbcce-beb3-4860-b5d9-0ce100981faa", 00:31:34.138 "base_bdev": "aio_bdev", 00:31:34.138 "thin_provision": false, 00:31:34.138 "num_allocated_clusters": 38, 00:31:34.138 "snapshot": false, 00:31:34.138 "clone": false, 00:31:34.138 "esnap_clone": false 00:31:34.138 } 00:31:34.138 } 00:31:34.138 } 00:31:34.138 ] 00:31:34.138 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:34.138 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:34.138 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:34.396 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:34.396 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:34.396 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:34.655 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:34.655 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:34.913 [2024-12-10 05:56:22.576769] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:34.913 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:34.914 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:34.914 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:34.914 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:34.914 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:34.914 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:34.914 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:34.914 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:34.914 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:34.914 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:34.914 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:34.914 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:34.914 request: 00:31:34.914 { 00:31:34.914 "uuid": "2c9bbcce-beb3-4860-b5d9-0ce100981faa", 00:31:34.914 "method": "bdev_lvol_get_lvstores", 00:31:34.914 "req_id": 1 00:31:34.914 } 00:31:34.914 Got JSON-RPC error response 00:31:34.914 response: 00:31:34.914 { 00:31:34.914 "code": -19, 00:31:34.914 "message": "No such device" 00:31:34.914 } 00:31:35.172 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:35.172 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:35.172 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:35.172 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:35.172 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:35.172 aio_bdev 00:31:35.172 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 474616d3-2d67-4f3c-92de-da3ce2d2e172 00:31:35.172 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=474616d3-2d67-4f3c-92de-da3ce2d2e172 00:31:35.172 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:35.172 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:35.172 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:35.173 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:35.173 05:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:35.431 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 474616d3-2d67-4f3c-92de-da3ce2d2e172 -t 2000 00:31:35.689 [ 00:31:35.689 { 00:31:35.689 "name": "474616d3-2d67-4f3c-92de-da3ce2d2e172", 00:31:35.689 "aliases": [ 00:31:35.689 "lvs/lvol" 00:31:35.689 ], 00:31:35.689 "product_name": "Logical Volume", 00:31:35.689 "block_size": 4096, 00:31:35.689 "num_blocks": 38912, 00:31:35.689 "uuid": "474616d3-2d67-4f3c-92de-da3ce2d2e172", 00:31:35.689 "assigned_rate_limits": { 00:31:35.689 "rw_ios_per_sec": 0, 00:31:35.689 "rw_mbytes_per_sec": 0, 00:31:35.689 
"r_mbytes_per_sec": 0, 00:31:35.689 "w_mbytes_per_sec": 0 00:31:35.689 }, 00:31:35.689 "claimed": false, 00:31:35.689 "zoned": false, 00:31:35.689 "supported_io_types": { 00:31:35.689 "read": true, 00:31:35.689 "write": true, 00:31:35.689 "unmap": true, 00:31:35.689 "flush": false, 00:31:35.689 "reset": true, 00:31:35.689 "nvme_admin": false, 00:31:35.689 "nvme_io": false, 00:31:35.689 "nvme_io_md": false, 00:31:35.689 "write_zeroes": true, 00:31:35.689 "zcopy": false, 00:31:35.689 "get_zone_info": false, 00:31:35.689 "zone_management": false, 00:31:35.689 "zone_append": false, 00:31:35.689 "compare": false, 00:31:35.689 "compare_and_write": false, 00:31:35.689 "abort": false, 00:31:35.689 "seek_hole": true, 00:31:35.689 "seek_data": true, 00:31:35.689 "copy": false, 00:31:35.689 "nvme_iov_md": false 00:31:35.689 }, 00:31:35.689 "driver_specific": { 00:31:35.689 "lvol": { 00:31:35.689 "lvol_store_uuid": "2c9bbcce-beb3-4860-b5d9-0ce100981faa", 00:31:35.689 "base_bdev": "aio_bdev", 00:31:35.689 "thin_provision": false, 00:31:35.690 "num_allocated_clusters": 38, 00:31:35.690 "snapshot": false, 00:31:35.690 "clone": false, 00:31:35.690 "esnap_clone": false 00:31:35.690 } 00:31:35.690 } 00:31:35.690 } 00:31:35.690 ] 00:31:35.690 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:35.690 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:35.690 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:35.690 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:35.690 05:56:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:35.690 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:35.948 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:35.948 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 474616d3-2d67-4f3c-92de-da3ce2d2e172 00:31:36.207 05:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c9bbcce-beb3-4860-b5d9-0ce100981faa 00:31:36.466 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:36.466 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:36.725 00:31:36.725 real 0m17.064s 00:31:36.725 user 0m34.506s 00:31:36.725 sys 0m3.783s 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:36.725 ************************************ 00:31:36.725 END TEST lvs_grow_dirty 00:31:36.725 ************************************ 
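For reference, the bdev descriptor that `rpc.py bdev_get_bdevs` printed above can be validated programmatically rather than eyeballed. A minimal sketch follows; the JSON literal is abridged from the log output above (field names match what SPDK printed there), and the size arithmetic simply combines the `block_size` and `num_blocks` fields shown:

```python
import json

# Abridged bdev descriptor, copied from the `bdev_get_bdevs` output in the
# log above (values are illustrative; a real run would capture rpc.py stdout).
bdev_json = '''
[
  {
    "name": "474616d3-2d67-4f3c-92de-da3ce2d2e172",
    "product_name": "Logical Volume",
    "block_size": 4096,
    "num_blocks": 38912,
    "supported_io_types": {"read": true, "write": true,
                           "unmap": true, "flush": false},
    "driver_specific": {
      "lvol": {
        "lvol_store_uuid": "2c9bbcce-beb3-4860-b5d9-0ce100981faa",
        "base_bdev": "aio_bdev",
        "thin_provision": false,
        "num_allocated_clusters": 38
      }
    }
  }
]
'''

bdev = json.loads(bdev_json)[0]

# Volume size in bytes is block_size * num_blocks.
size_bytes = bdev["block_size"] * bdev["num_blocks"]
print(size_bytes)  # 159383552

# The lvol sits on the aio_bdev created at the start of the test.
print(bdev["driver_specific"]["lvol"]["base_bdev"])  # aio_bdev
```

The shell test itself performs the equivalent checks with `jq` (e.g. `jq -r '.[0].free_clusters'` against `bdev_lvol_get_lvstores`) before tearing the lvol, lvstore, and aio bdev down.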
00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:36.725 nvmf_trace.0 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:36.725 05:56:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:36.725 rmmod nvme_tcp 00:31:36.725 rmmod nvme_fabrics 00:31:36.725 rmmod nvme_keyring 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1389893 ']' 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1389893 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1389893 ']' 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1389893 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1389893 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:36.725 
05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1389893' 00:31:36.725 killing process with pid 1389893 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1389893 00:31:36.725 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1389893 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.985 05:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.999 
05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:38.999 00:31:38.999 real 0m41.861s 00:31:38.999 user 0m52.136s 00:31:38.999 sys 0m10.171s 00:31:38.999 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:38.999 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:38.999 ************************************ 00:31:38.999 END TEST nvmf_lvs_grow 00:31:38.999 ************************************ 00:31:38.999 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:38.999 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:38.999 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:38.999 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:39.259 ************************************ 00:31:39.259 START TEST nvmf_bdev_io_wait 00:31:39.259 ************************************ 00:31:39.259 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:39.259 * Looking for test storage... 
00:31:39.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:39.259 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:39.259 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:31:39.259 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:39.259 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:39.259 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.259 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.259 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.259 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:39.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.260 --rc genhtml_branch_coverage=1 00:31:39.260 --rc genhtml_function_coverage=1 00:31:39.260 --rc genhtml_legend=1 00:31:39.260 --rc geninfo_all_blocks=1 00:31:39.260 --rc geninfo_unexecuted_blocks=1 00:31:39.260 00:31:39.260 ' 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:39.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.260 --rc genhtml_branch_coverage=1 00:31:39.260 --rc genhtml_function_coverage=1 00:31:39.260 --rc genhtml_legend=1 00:31:39.260 --rc geninfo_all_blocks=1 00:31:39.260 --rc geninfo_unexecuted_blocks=1 00:31:39.260 00:31:39.260 ' 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:39.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.260 --rc genhtml_branch_coverage=1 00:31:39.260 --rc genhtml_function_coverage=1 00:31:39.260 --rc genhtml_legend=1 00:31:39.260 --rc geninfo_all_blocks=1 00:31:39.260 --rc geninfo_unexecuted_blocks=1 00:31:39.260 00:31:39.260 ' 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:39.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.260 --rc genhtml_branch_coverage=1 00:31:39.260 --rc genhtml_function_coverage=1 
00:31:39.260 --rc genhtml_legend=1 00:31:39.260 --rc geninfo_all_blocks=1 00:31:39.260 --rc geninfo_unexecuted_blocks=1 00:31:39.260 00:31:39.260 ' 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:39.260 05:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.260 05:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:39.260 05:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:39.260 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:39.261 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.261 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:39.261 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:39.261 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:39.261 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.261 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.261 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.261 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:39.261 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:39.261 05:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:39.261 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:45.830 05:56:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:45.830 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:45.830 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:45.830 Found net devices under 0000:af:00.0: cvl_0_0 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:45.830 Found net devices under 0000:af:00.1: cvl_0_1 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:45.830 05:56:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:45.830 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:45.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:45.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:31:45.831 00:31:45.831 --- 10.0.0.2 ping statistics --- 00:31:45.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.831 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:45.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:45.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:31:45.831 00:31:45.831 --- 10.0.0.1 ping statistics --- 00:31:45.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.831 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:45.831 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:45.831 05:56:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1393877 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1393877 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1393877 ']' 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.831 [2024-12-10 05:56:33.089900] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:45.831 [2024-12-10 05:56:33.090791] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:31:45.831 [2024-12-10 05:56:33.090822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:45.831 [2024-12-10 05:56:33.167730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:45.831 [2024-12-10 05:56:33.209453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:45.831 [2024-12-10 05:56:33.209491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:45.831 [2024-12-10 05:56:33.209501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:45.831 [2024-12-10 05:56:33.209508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:45.831 [2024-12-10 05:56:33.209514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:45.831 [2024-12-10 05:56:33.210978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.831 [2024-12-10 05:56:33.211088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:45.831 [2024-12-10 05:56:33.211213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.831 [2024-12-10 05:56:33.211213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:45.831 [2024-12-10 05:56:33.211515] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.831 05:56:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.831 [2024-12-10 05:56:33.343878] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:45.831 [2024-12-10 05:56:33.344506] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:45.831 [2024-12-10 05:56:33.344509] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:45.831 [2024-12-10 05:56:33.344667] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.831 [2024-12-10 05:56:33.355894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.831 Malloc0 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.831 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.832 05:56:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.832 [2024-12-10 05:56:33.423963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1393899 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1393901 00:31:45.832 05:56:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:45.832 { 00:31:45.832 "params": { 00:31:45.832 "name": "Nvme$subsystem", 00:31:45.832 "trtype": "$TEST_TRANSPORT", 00:31:45.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:45.832 "adrfam": "ipv4", 00:31:45.832 "trsvcid": "$NVMF_PORT", 00:31:45.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:45.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:45.832 "hdgst": ${hdgst:-false}, 00:31:45.832 "ddgst": ${ddgst:-false} 00:31:45.832 }, 00:31:45.832 "method": "bdev_nvme_attach_controller" 00:31:45.832 } 00:31:45.832 EOF 00:31:45.832 )") 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1393903 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:45.832 05:56:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:45.832 { 00:31:45.832 "params": { 00:31:45.832 "name": "Nvme$subsystem", 00:31:45.832 "trtype": "$TEST_TRANSPORT", 00:31:45.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:45.832 "adrfam": "ipv4", 00:31:45.832 "trsvcid": "$NVMF_PORT", 00:31:45.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:45.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:45.832 "hdgst": ${hdgst:-false}, 00:31:45.832 "ddgst": ${ddgst:-false} 00:31:45.832 }, 00:31:45.832 "method": "bdev_nvme_attach_controller" 00:31:45.832 } 00:31:45.832 EOF 00:31:45.832 )") 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1393906 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:45.832 { 00:31:45.832 "params": { 00:31:45.832 "name": 
"Nvme$subsystem", 00:31:45.832 "trtype": "$TEST_TRANSPORT", 00:31:45.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:45.832 "adrfam": "ipv4", 00:31:45.832 "trsvcid": "$NVMF_PORT", 00:31:45.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:45.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:45.832 "hdgst": ${hdgst:-false}, 00:31:45.832 "ddgst": ${ddgst:-false} 00:31:45.832 }, 00:31:45.832 "method": "bdev_nvme_attach_controller" 00:31:45.832 } 00:31:45.832 EOF 00:31:45.832 )") 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:45.832 { 00:31:45.832 "params": { 00:31:45.832 "name": "Nvme$subsystem", 00:31:45.832 "trtype": "$TEST_TRANSPORT", 00:31:45.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:45.832 "adrfam": "ipv4", 00:31:45.832 "trsvcid": "$NVMF_PORT", 00:31:45.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:45.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:45.832 "hdgst": ${hdgst:-false}, 00:31:45.832 "ddgst": ${ddgst:-false} 00:31:45.832 }, 00:31:45.832 "method": 
"bdev_nvme_attach_controller" 00:31:45.832 } 00:31:45.832 EOF 00:31:45.832 )") 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1393899 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:45.832 "params": { 00:31:45.832 "name": "Nvme1", 00:31:45.832 "trtype": "tcp", 00:31:45.832 "traddr": "10.0.0.2", 00:31:45.832 "adrfam": "ipv4", 00:31:45.832 "trsvcid": "4420", 00:31:45.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:45.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:45.832 "hdgst": false, 00:31:45.832 "ddgst": false 00:31:45.832 }, 00:31:45.832 "method": "bdev_nvme_attach_controller" 00:31:45.832 }' 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:45.832 "params": { 00:31:45.832 "name": "Nvme1", 00:31:45.832 "trtype": "tcp", 00:31:45.832 "traddr": "10.0.0.2", 00:31:45.832 "adrfam": "ipv4", 00:31:45.832 "trsvcid": "4420", 00:31:45.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:45.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:45.832 "hdgst": false, 00:31:45.832 "ddgst": false 00:31:45.832 }, 00:31:45.832 "method": "bdev_nvme_attach_controller" 00:31:45.832 }' 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:45.832 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:45.832 "params": { 00:31:45.832 "name": "Nvme1", 00:31:45.832 "trtype": "tcp", 00:31:45.832 "traddr": "10.0.0.2", 00:31:45.832 "adrfam": "ipv4", 00:31:45.832 "trsvcid": "4420", 00:31:45.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:45.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:45.832 "hdgst": false, 00:31:45.832 "ddgst": false 00:31:45.832 }, 00:31:45.832 "method": "bdev_nvme_attach_controller" 00:31:45.832 }' 00:31:45.833 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:45.833 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:45.833 "params": { 00:31:45.833 "name": "Nvme1", 00:31:45.833 "trtype": "tcp", 00:31:45.833 "traddr": "10.0.0.2", 00:31:45.833 "adrfam": "ipv4", 00:31:45.833 "trsvcid": "4420", 00:31:45.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:45.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:45.833 "hdgst": false, 00:31:45.833 "ddgst": false 00:31:45.833 }, 00:31:45.833 "method": "bdev_nvme_attach_controller" 
00:31:45.833 }' 00:31:45.833 [2024-12-10 05:56:33.475280] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:31:45.833 [2024-12-10 05:56:33.475321] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:45.833 [2024-12-10 05:56:33.476842] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:31:45.833 [2024-12-10 05:56:33.476892] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:45.833 [2024-12-10 05:56:33.478925] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:31:45.833 [2024-12-10 05:56:33.478966] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:45.833 [2024-12-10 05:56:33.480615] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:31:45.833 [2024-12-10 05:56:33.480655] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:45.833 [2024-12-10 05:56:33.616048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.833 [2024-12-10 05:56:33.649613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:46.091 [2024-12-10 05:56:33.723409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.092 [2024-12-10 05:56:33.768525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:46.092 [2024-12-10 05:56:33.821124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.092 [2024-12-10 05:56:33.866536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:46.092 [2024-12-10 05:56:33.920357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.092 [2024-12-10 05:56:33.974858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:46.350 Running I/O for 1 seconds... 00:31:46.350 Running I/O for 1 seconds... 00:31:46.350 Running I/O for 1 seconds... 00:31:46.608 Running I/O for 1 seconds... 
00:31:47.544 8830.00 IOPS, 34.49 MiB/s
00:31:47.544 Latency(us)
00:31:47.544 [2024-12-10T04:56:35.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:47.544 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:31:47.544 Nvme1n1 : 1.02 8811.49 34.42 0.00 0.00 14430.36 3417.23 21470.84
00:31:47.544 [2024-12-10T04:56:35.440Z] ===================================================================================================================
00:31:47.544 [2024-12-10T04:56:35.440Z] Total : 8811.49 34.42 0.00 0.00 14430.36 3417.23 21470.84
00:31:47.544 242328.00 IOPS, 946.59 MiB/s
00:31:47.544 Latency(us)
00:31:47.544 [2024-12-10T04:56:35.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:47.544 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:31:47.544 Nvme1n1 : 1.00 241959.27 945.15 0.00 0.00 526.06 225.28 1497.97
00:31:47.544 [2024-12-10T04:56:35.440Z] ===================================================================================================================
00:31:47.544 [2024-12-10T04:56:35.440Z] Total : 241959.27 945.15 0.00 0.00 526.06 225.28 1497.97
00:31:47.544 7814.00 IOPS, 30.52 MiB/s
00:31:47.544 Latency(us)
00:31:47.544 [2024-12-10T04:56:35.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:47.544 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:31:47.544 Nvme1n1 : 1.01 7920.87 30.94 0.00 0.00 16114.95 4462.69 24841.26
00:31:47.544 [2024-12-10T04:56:35.440Z] ===================================================================================================================
00:31:47.544 [2024-12-10T04:56:35.440Z] Total : 7920.87 30.94 0.00 0.00 16114.95 4462.69 24841.26
00:31:47.544 12933.00 IOPS, 50.52 MiB/s
00:31:47.544 Latency(us)
00:31:47.544 [2024-12-10T04:56:35.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:47.544 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:31:47.544 Nvme1n1 : 1.01 13031.59 50.90 0.00 0.00 9799.33 3292.40 14230.67
00:31:47.544 [2024-12-10T04:56:35.440Z] ===================================================================================================================
00:31:47.544 [2024-12-10T04:56:35.440Z] Total : 13031.59 50.90 0.00 0.00 9799.33 3292.40 14230.67
00:31:47.544 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1393901
00:31:47.544 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1393903
00:31:47.544 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1393906
00:31:47.544 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:47.544 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:47.544 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:31:47.544 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:47.544 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:31:47.544 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:31:47.545 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:47.545 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:31:47.545 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:47.545 05:56:35
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:31:47.545 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:47.545 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:47.545 rmmod nvme_tcp
00:31:47.545 rmmod nvme_fabrics
00:31:47.803 rmmod nvme_keyring
00:31:47.803 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:47.803 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:31:47.803 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:31:47.803 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1393877 ']'
00:31:47.803 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1393877
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1393877 ']'
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1393877
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1393877
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1393877'
00:31:47.804 killing process with pid 1393877
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1393877
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1393877
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:47.804 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:50.340 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:50.340
00:31:50.340 real 0m10.848s
00:31:50.340 user 0m15.610s
00:31:50.340 sys 0m6.382s
00:31:50.340 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:50.340 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:31:50.340 ************************************
00:31:50.340 END TEST nvmf_bdev_io_wait
00:31:50.340 ************************************
00:31:50.340 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:31:50.340 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:50.340 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:50.340 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:50.340 ************************************
00:31:50.340 START TEST nvmf_queue_depth
00:31:50.340 ************************************
00:31:50.340 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:31:50.340 * Looking for test storage...
00:31:50.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:50.340 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:50.341 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:50.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.341 --rc genhtml_branch_coverage=1 00:31:50.341 --rc genhtml_function_coverage=1 00:31:50.341 --rc genhtml_legend=1 00:31:50.341 --rc geninfo_all_blocks=1 00:31:50.341 --rc geninfo_unexecuted_blocks=1 00:31:50.341 00:31:50.341 ' 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:50.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.341 --rc genhtml_branch_coverage=1 00:31:50.341 --rc genhtml_function_coverage=1 00:31:50.341 --rc genhtml_legend=1 00:31:50.341 --rc geninfo_all_blocks=1 00:31:50.341 --rc geninfo_unexecuted_blocks=1 00:31:50.341 00:31:50.341 ' 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:50.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.341 --rc genhtml_branch_coverage=1 00:31:50.341 --rc genhtml_function_coverage=1 00:31:50.341 --rc genhtml_legend=1 00:31:50.341 --rc geninfo_all_blocks=1 00:31:50.341 --rc geninfo_unexecuted_blocks=1 00:31:50.341 00:31:50.341 ' 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:50.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.341 --rc genhtml_branch_coverage=1 00:31:50.341 --rc genhtml_function_coverage=1 00:31:50.341 --rc genhtml_legend=1 00:31:50.341 --rc 
geninfo_all_blocks=1 00:31:50.341 --rc geninfo_unexecuted_blocks=1 00:31:50.341 00:31:50.341 ' 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.341 05:56:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:50.341 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.341 05:56:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:50.342 05:56:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:50.342 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.915 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.916 
05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:31:56.916 Found 0000:af:00.0 (0x8086 - 0x159b)
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:31:56.916 Found 0000:af:00.1 (0x8086 - 0x159b)
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:31:56.916 Found net devices under 0000:af:00.0: cvl_0_0
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:31:56.916 Found net devices under 0000:af:00.1: cvl_0_1
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:56.916 05:56:43
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.916 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:56.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:31:56.916 00:31:56.916 --- 10.0.0.2 ping statistics --- 00:31:56.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.917 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:31:56.917 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:56.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:31:56.917 00:31:56.917 --- 10.0.0.1 ping statistics --- 00:31:56.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.917 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:31:56.917 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.917 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:56.917 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:56.917 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.917 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:56.917 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:56.917 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.917 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:56.917 05:56:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1397832 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1397832 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1397832 ']' 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:56.917 [2024-12-10 05:56:44.092803] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:56.917 [2024-12-10 05:56:44.093694] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:31:56.917 [2024-12-10 05:56:44.093726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.917 [2024-12-10 05:56:44.172537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.917 [2024-12-10 05:56:44.211650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.917 [2024-12-10 05:56:44.211683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.917 [2024-12-10 05:56:44.211690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.917 [2024-12-10 05:56:44.211696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.917 [2024-12-10 05:56:44.211701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.917 [2024-12-10 05:56:44.212155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.917 [2024-12-10 05:56:44.278263] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:56.917 [2024-12-10 05:56:44.278481] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:56.917 [2024-12-10 05:56:44.344805] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:56.917 Malloc0 00:31:56.917 05:56:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:56.917 [2024-12-10 05:56:44.416926] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.917 
05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1397852 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1397852 /var/tmp/bdevperf.sock 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1397852 ']' 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:56.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:56.917 [2024-12-10 05:56:44.465149] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:31:56.917 [2024-12-10 05:56:44.465193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1397852 ] 00:31:56.917 [2024-12-10 05:56:44.538214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.917 [2024-12-10 05:56:44.579242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.917 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:56.918 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:56.918 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.918 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:57.177 NVMe0n1 00:31:57.177 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.177 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:57.177 Running I/O for 10 seconds... 
00:31:59.493 12111.00 IOPS, 47.31 MiB/s [2024-12-10T04:56:48.326Z] 12281.50 IOPS, 47.97 MiB/s [2024-12-10T04:56:49.264Z] 12290.33 IOPS, 48.01 MiB/s [2024-12-10T04:56:50.202Z] 12291.50 IOPS, 48.01 MiB/s [2024-12-10T04:56:51.140Z] 12289.00 IOPS, 48.00 MiB/s [2024-12-10T04:56:52.078Z] 12350.33 IOPS, 48.24 MiB/s [2024-12-10T04:56:53.457Z] 12374.43 IOPS, 48.34 MiB/s [2024-12-10T04:56:54.026Z] 12408.50 IOPS, 48.47 MiB/s [2024-12-10T04:56:55.404Z] 12406.00 IOPS, 48.46 MiB/s [2024-12-10T04:56:55.404Z] 12445.70 IOPS, 48.62 MiB/s 00:32:07.508 Latency(us) 00:32:07.508 [2024-12-10T04:56:55.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.508 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:07.508 Verification LBA range: start 0x0 length 0x4000 00:32:07.508 NVMe0n1 : 10.07 12470.37 48.71 0.00 0.00 81823.28 18724.57 53427.44 00:32:07.508 [2024-12-10T04:56:55.404Z] =================================================================================================================== 00:32:07.508 [2024-12-10T04:56:55.404Z] Total : 12470.37 48.71 0.00 0.00 81823.28 18724.57 53427.44 00:32:07.508 { 00:32:07.508 "results": [ 00:32:07.508 { 00:32:07.508 "job": "NVMe0n1", 00:32:07.508 "core_mask": "0x1", 00:32:07.508 "workload": "verify", 00:32:07.508 "status": "finished", 00:32:07.508 "verify_range": { 00:32:07.508 "start": 0, 00:32:07.508 "length": 16384 00:32:07.508 }, 00:32:07.508 "queue_depth": 1024, 00:32:07.508 "io_size": 4096, 00:32:07.508 "runtime": 10.065862, 00:32:07.508 "iops": 12470.367664488149, 00:32:07.508 "mibps": 48.71237368940683, 00:32:07.508 "io_failed": 0, 00:32:07.508 "io_timeout": 0, 00:32:07.508 "avg_latency_us": 81823.27639460172, 00:32:07.508 "min_latency_us": 18724.571428571428, 00:32:07.508 "max_latency_us": 53427.44380952381 00:32:07.508 } 00:32:07.508 ], 00:32:07.508 "core_count": 1 00:32:07.508 } 00:32:07.508 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1397852 00:32:07.508 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1397852 ']' 00:32:07.508 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1397852 00:32:07.508 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:07.508 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.508 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1397852 00:32:07.508 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:07.508 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1397852' 00:32:07.509 killing process with pid 1397852 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1397852 00:32:07.509 Received shutdown signal, test time was about 10.000000 seconds 00:32:07.509 00:32:07.509 Latency(us) 00:32:07.509 [2024-12-10T04:56:55.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.509 [2024-12-10T04:56:55.405Z] =================================================================================================================== 00:32:07.509 [2024-12-10T04:56:55.405Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1397852 00:32:07.509 05:56:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:07.509 rmmod nvme_tcp 00:32:07.509 rmmod nvme_fabrics 00:32:07.509 rmmod nvme_keyring 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1397832 ']' 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1397832 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1397832 ']' 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1397832 00:32:07.509 05:56:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.509 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1397832 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1397832' 00:32:07.768 killing process with pid 1397832 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1397832 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1397832 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.768 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:10.306 00:32:10.306 real 0m19.856s 00:32:10.306 user 0m22.846s 00:32:10.306 sys 0m6.311s 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:10.306 ************************************ 00:32:10.306 END TEST nvmf_queue_depth 00:32:10.306 ************************************ 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:10.306 ************************************ 00:32:10.306 START 
TEST nvmf_target_multipath 00:32:10.306 ************************************ 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:10.306 * Looking for test storage... 00:32:10.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:10.306 05:56:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:10.306 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:10.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.307 --rc genhtml_branch_coverage=1 00:32:10.307 --rc genhtml_function_coverage=1 00:32:10.307 --rc genhtml_legend=1 00:32:10.307 --rc geninfo_all_blocks=1 00:32:10.307 --rc geninfo_unexecuted_blocks=1 00:32:10.307 00:32:10.307 ' 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:10.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.307 --rc genhtml_branch_coverage=1 00:32:10.307 --rc genhtml_function_coverage=1 00:32:10.307 --rc genhtml_legend=1 00:32:10.307 --rc geninfo_all_blocks=1 00:32:10.307 --rc geninfo_unexecuted_blocks=1 00:32:10.307 00:32:10.307 ' 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:10.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.307 --rc genhtml_branch_coverage=1 00:32:10.307 --rc genhtml_function_coverage=1 00:32:10.307 --rc genhtml_legend=1 00:32:10.307 --rc geninfo_all_blocks=1 00:32:10.307 --rc geninfo_unexecuted_blocks=1 00:32:10.307 00:32:10.307 ' 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:10.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.307 --rc genhtml_branch_coverage=1 00:32:10.307 --rc genhtml_function_coverage=1 00:32:10.307 --rc genhtml_legend=1 00:32:10.307 --rc geninfo_all_blocks=1 00:32:10.307 --rc geninfo_unexecuted_blocks=1 00:32:10.307 00:32:10.307 ' 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.307 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.308 05:56:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.308 05:56:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:10.308 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:16.880 05:57:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:16.880 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.880 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:16.881 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:16.881 Found net devices under 0000:af:00.0: cvl_0_0 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.881 05:57:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:16.881 Found net devices under 0000:af:00.1: cvl_0_1 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:16.881 05:57:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:16.881 05:57:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:16.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:16.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:32:16.881 00:32:16.881 --- 10.0.0.2 ping statistics --- 00:32:16.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.881 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:16.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:16.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:32:16.881 00:32:16.881 --- 10.0.0.1 ping statistics --- 00:32:16.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.881 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:16.881 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:16.882 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:16.882 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:16.882 only one NIC for nvmf test 00:32:16.882 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:16.882 05:57:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:16.882 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:16.882 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:16.882 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:16.882 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:16.882 05:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:16.882 rmmod nvme_tcp 00:32:16.882 rmmod nvme_fabrics 00:32:16.882 rmmod nvme_keyring 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:16.882 05:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.882 05:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.339 
05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:18.339 00:32:18.339 real 0m8.397s 00:32:18.339 user 0m1.759s 00:32:18.339 sys 0m4.535s 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:18.339 ************************************ 00:32:18.339 END TEST nvmf_target_multipath 00:32:18.339 ************************************ 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:18.339 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:18.599 ************************************ 00:32:18.599 START TEST nvmf_zcopy 00:32:18.599 ************************************ 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:18.599 * Looking for test storage... 
00:32:18.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:18.599 05:57:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:18.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.599 --rc genhtml_branch_coverage=1 00:32:18.599 --rc genhtml_function_coverage=1 00:32:18.599 --rc genhtml_legend=1 00:32:18.599 --rc geninfo_all_blocks=1 00:32:18.599 --rc geninfo_unexecuted_blocks=1 00:32:18.599 00:32:18.599 ' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:18.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.599 --rc genhtml_branch_coverage=1 00:32:18.599 --rc genhtml_function_coverage=1 00:32:18.599 --rc genhtml_legend=1 00:32:18.599 --rc geninfo_all_blocks=1 00:32:18.599 --rc geninfo_unexecuted_blocks=1 00:32:18.599 00:32:18.599 ' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:18.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.599 --rc genhtml_branch_coverage=1 00:32:18.599 --rc genhtml_function_coverage=1 00:32:18.599 --rc genhtml_legend=1 00:32:18.599 --rc geninfo_all_blocks=1 00:32:18.599 --rc geninfo_unexecuted_blocks=1 00:32:18.599 00:32:18.599 ' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:18.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.599 --rc genhtml_branch_coverage=1 00:32:18.599 --rc genhtml_function_coverage=1 00:32:18.599 --rc genhtml_legend=1 00:32:18.599 --rc geninfo_all_blocks=1 00:32:18.599 --rc geninfo_unexecuted_blocks=1 00:32:18.599 00:32:18.599 ' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.599 05:57:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:18.599 05:57:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:18.599 05:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:25.170 
05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.170 05:57:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:25.170 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:25.170 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:25.170 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:25.171 Found net devices under 0000:af:00.0: cvl_0_0 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:25.171 Found net devices under 0000:af:00.1: cvl_0_1 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:25.171 05:57:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:25.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:25.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:32:25.171 00:32:25.171 --- 10.0.0.2 ping statistics --- 00:32:25.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.171 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:25.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:25.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:32:25.171 00:32:25.171 --- 10.0.0.1 ping statistics --- 00:32:25.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.171 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1407072 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1407072 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1407072 ']' 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.171 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.171 [2024-12-10 05:57:12.433208] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:25.171 [2024-12-10 05:57:12.434102] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:32:25.171 [2024-12-10 05:57:12.434137] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.171 [2024-12-10 05:57:12.512990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.171 [2024-12-10 05:57:12.551718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.171 [2024-12-10 05:57:12.551750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.171 [2024-12-10 05:57:12.551758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.171 [2024-12-10 05:57:12.551764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.171 [2024-12-10 05:57:12.551770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.171 [2024-12-10 05:57:12.552227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.171 [2024-12-10 05:57:12.618285] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:25.172 [2024-12-10 05:57:12.618491] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.172 [2024-12-10 05:57:12.696913] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.172 
05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.172 [2024-12-10 05:57:12.725138] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.172 malloc0 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:25.172 { 00:32:25.172 "params": { 00:32:25.172 "name": "Nvme$subsystem", 00:32:25.172 "trtype": "$TEST_TRANSPORT", 00:32:25.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.172 "adrfam": "ipv4", 00:32:25.172 "trsvcid": "$NVMF_PORT", 00:32:25.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.172 "hdgst": ${hdgst:-false}, 00:32:25.172 "ddgst": ${ddgst:-false} 00:32:25.172 }, 00:32:25.172 "method": "bdev_nvme_attach_controller" 00:32:25.172 } 00:32:25.172 EOF 00:32:25.172 )") 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:25.172 05:57:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:25.172 05:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:25.172 "params": { 00:32:25.172 "name": "Nvme1", 00:32:25.172 "trtype": "tcp", 00:32:25.172 "traddr": "10.0.0.2", 00:32:25.172 "adrfam": "ipv4", 00:32:25.172 "trsvcid": "4420", 00:32:25.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:25.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:25.172 "hdgst": false, 00:32:25.172 "ddgst": false 00:32:25.172 }, 00:32:25.172 "method": "bdev_nvme_attach_controller" 00:32:25.172 }' 00:32:25.172 [2024-12-10 05:57:12.822410] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:32:25.172 [2024-12-10 05:57:12.822452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407104 ] 00:32:25.172 [2024-12-10 05:57:12.897365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.172 [2024-12-10 05:57:12.936500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.431 Running I/O for 10 seconds... 
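The `gen_nvmf_target_json` heredoc expanded above (nvmf/common.sh@582) is what bdevperf receives on `/dev/fd/62`: a `bdev_nvme_attach_controller` entry whose fields come from the test environment, with `${hdgst:-false}`/`${ddgst:-false}` defaulting the digest options off. A minimal Python sketch of the same JSON, using the values visible in the `printf '%s\n'` output of this log (the function name mirrors the shell helper; it is an illustration, not SPDK code):

```python
import json

# Hypothetical re-creation of the config gen_nvmf_target_json emits for bdevperf.
# Field values are taken from the log above; hdgst/ddgst default to false,
# mirroring the ${hdgst:-false} / ${ddgst:-false} expansions in the heredoc.
def gen_nvmf_target_json(subsystem=1, transport="tcp",
                         target_ip="10.0.0.2", port="4420"):
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": transport,
            "traddr": target_ip,
            "adrfam": "ipv4",
            "trsvcid": port,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,
            "ddgst": False,
        },
        "method": "bdev_nvme_attach_controller",
    }

config = gen_nvmf_target_json()
print(json.dumps(config, indent=2))
```

In the shell helper, one such entry is generated per subsystem, the entries are joined with `IFS=,`, and the result is piped through `jq .` before being handed to bdevperf.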
00:32:27.311 8547.00 IOPS, 66.77 MiB/s [2024-12-10T04:57:16.143Z] 8614.50 IOPS, 67.30 MiB/s [2024-12-10T04:57:17.521Z] 8649.33 IOPS, 67.57 MiB/s [2024-12-10T04:57:18.458Z] 8661.25 IOPS, 67.67 MiB/s [2024-12-10T04:57:19.395Z] 8675.00 IOPS, 67.77 MiB/s [2024-12-10T04:57:20.333Z] 8686.50 IOPS, 67.86 MiB/s [2024-12-10T04:57:21.270Z] 8684.43 IOPS, 67.85 MiB/s [2024-12-10T04:57:22.206Z] 8675.38 IOPS, 67.78 MiB/s [2024-12-10T04:57:23.143Z] 8681.56 IOPS, 67.82 MiB/s [2024-12-10T04:57:23.143Z] 8681.60 IOPS, 67.83 MiB/s 00:32:35.247 Latency(us) 00:32:35.247 [2024-12-10T04:57:23.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.247 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:35.247 Verification LBA range: start 0x0 length 0x1000 00:32:35.247 Nvme1n1 : 10.01 8686.10 67.86 0.00 0.00 14694.30 2340.57 21470.84 00:32:35.247 [2024-12-10T04:57:23.143Z] =================================================================================================================== 00:32:35.247 [2024-12-10T04:57:23.143Z] Total : 8686.10 67.86 0.00 0.00 14694.30 2340.57 21470.84 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1408661 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:35.506 05:57:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:35.506 { 00:32:35.506 "params": { 00:32:35.506 "name": "Nvme$subsystem", 00:32:35.506 "trtype": "$TEST_TRANSPORT", 00:32:35.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.506 "adrfam": "ipv4", 00:32:35.506 "trsvcid": "$NVMF_PORT", 00:32:35.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.506 "hdgst": ${hdgst:-false}, 00:32:35.506 "ddgst": ${ddgst:-false} 00:32:35.506 }, 00:32:35.506 "method": "bdev_nvme_attach_controller" 00:32:35.506 } 00:32:35.506 EOF 00:32:35.506 )") 00:32:35.506 [2024-12-10 05:57:23.284596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.506 [2024-12-10 05:57:23.284630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:32:35.506 [2024-12-10 05:57:23.292556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.506 [2024-12-10 05:57:23.292568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:35.506 05:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:35.506 "params": { 00:32:35.506 "name": "Nvme1", 00:32:35.506 "trtype": "tcp", 00:32:35.506 "traddr": "10.0.0.2", 00:32:35.506 "adrfam": "ipv4", 00:32:35.506 "trsvcid": "4420", 00:32:35.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:35.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:35.506 "hdgst": false, 00:32:35.506 "ddgst": false 00:32:35.506 }, 00:32:35.506 "method": "bdev_nvme_attach_controller" 00:32:35.506 }' 00:32:35.506 [2024-12-10 05:57:23.300550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.506 [2024-12-10 05:57:23.300561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.506 [2024-12-10 05:57:23.308549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.506 [2024-12-10 05:57:23.308559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.506 [2024-12-10 05:57:23.316550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.506 [2024-12-10 05:57:23.316561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.506 [2024-12-10 05:57:23.322555] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:32:35.506 [2024-12-10 05:57:23.322597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408661 ] 00:32:35.506 [2024-12-10 05:57:23.328550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.506 [2024-12-10 05:57:23.328560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.507 [2024-12-10 05:57:23.340549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.507 [2024-12-10 05:57:23.340560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.507 [2024-12-10 05:57:23.352551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.507 [2024-12-10 05:57:23.352562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.507 [2024-12-10 05:57:23.364547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.507 [2024-12-10 05:57:23.364557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.507 [2024-12-10 05:57:23.376550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.507 [2024-12-10 05:57:23.376559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.507 [2024-12-10 05:57:23.384547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.507 [2024-12-10 05:57:23.384557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.507 [2024-12-10 05:57:23.392549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.507 [2024-12-10 05:57:23.392558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:32:35.766 [2024-12-10 05:57:23.397905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.766 [2024-12-10 05:57:23.400550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.400560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.408552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.408565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.416549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.416559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.424549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.424577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.432553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.432565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.439449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.766 [2024-12-10 05:57:23.440553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.440566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.448549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.448564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.456557] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.456575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.464554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.464569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.472553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.472566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.480551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.480564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.488551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.488562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.496551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.496563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.504552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.504564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.512548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.512559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.520548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.520557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.528567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.528588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.536591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.536606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.544554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.544567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.552555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.552568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.560551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.560565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.568552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.568564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.576548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.576557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.584547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 
[2024-12-10 05:57:23.584556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.592546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.592555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.600552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.600569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.608554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.608567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.616549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.616561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.624555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.624571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.632550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.632562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 Running I/O for 5 seconds... 
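The repeating pairs of `subsystem.c:2130: Requested NSID 1 already in use` and `nvmf_rpc.c:1520: Unable to add namespace` above are the expected failure path: NSID 1 was already claimed by the `nvmf_subsystem_add_ns` call earlier in the test, so every retried add against the same namespace ID is rejected while bdevperf starts up. A toy sketch of the duplicate-NSID check behind that message (an illustration of the idea only, not SPDK's actual implementation):

```python
# Toy model (not SPDK source) of the check behind
# "Requested NSID 1 already in use": a subsystem tracks its active
# namespace IDs and refuses to add a bdev under an NSID already present.
class Subsystem:
    def __init__(self):
        self.namespaces = {}  # nsid -> bdev name

    def add_ns(self, bdev_name, nsid):
        if nsid in self.namespaces:
            # corresponds to spdk_nvmf_subsystem_add_ns_ext failing,
            # which the RPC layer reports as "Unable to add namespace"
            raise ValueError(f"Requested NSID {nsid} already in use")
        self.namespaces[nsid] = bdev_name
        return nsid

sub = Subsystem()
sub.add_ns("malloc0", nsid=1)      # first add succeeds, as in zcopy.sh@30
try:
    sub.add_ns("malloc1", nsid=1)  # any later add with NSID 1 is rejected
except ValueError as e:
    print(e)
```

Because the test keeps issuing the add while the subsystem is paused/resumed, the same two-line error pair shows up once per attempt, which is why it dominates this stretch of the log.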
00:32:35.766 [2024-12-10 05:57:23.644552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.644571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:35.766 [2024-12-10 05:57:23.652061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:35.766 [2024-12-10 05:57:23.652080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.665779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.665798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.676709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.676727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.683313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.683332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.697737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.697756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.709073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.709091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.722499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.722516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.737413] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.737431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.747807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.747825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.761964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.761982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.772402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.772419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.786121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.786139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.796233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.796252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.809817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.809838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.821110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.821129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.834529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.834547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.841281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.841301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.852259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.852279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.866798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.866817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.881388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.881407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.892203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.892222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.905864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.905883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.026 [2024-12-10 05:57:23.915815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.026 [2024-12-10 05:57:23.915833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:23.929908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 
[2024-12-10 05:57:23.929927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:23.940954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:23.940971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:23.954232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:23.954250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:23.962855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:23.962873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:23.977740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:23.977759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:23.988589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:23.988608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:23.995443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:23.995462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.008926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.008944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.021658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.021677] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.032715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.032734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.039327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.039345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.053291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.053309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.065905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.065924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.075643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.075662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.090139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.090157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.100293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.100312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.114415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.114433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:36.285 [2024-12-10 05:57:24.124537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.124556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.131197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.131215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.145505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.145523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.158250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.158269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.285 [2024-12-10 05:57:24.168779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.285 [2024-12-10 05:57:24.168797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.544 [2024-12-10 05:57:24.182325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.544 [2024-12-10 05:57:24.182351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.544 [2024-12-10 05:57:24.189236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.544 [2024-12-10 05:57:24.189254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.544 [2024-12-10 05:57:24.200280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.544 [2024-12-10 05:57:24.200298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.544 [2024-12-10 05:57:24.214155] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.544 [2024-12-10 05:57:24.214181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.544 [2024-12-10 05:57:24.225014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.544 [2024-12-10 05:57:24.225032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.544 [2024-12-10 05:57:24.237876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.544 [2024-12-10 05:57:24.237899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.544 [2024-12-10 05:57:24.248270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.544 [2024-12-10 05:57:24.248289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.544 [2024-12-10 05:57:24.262040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.544 [2024-12-10 05:57:24.262059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.544 [2024-12-10 05:57:24.273015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.544 [2024-12-10 05:57:24.273033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.544 [2024-12-10 05:57:24.286171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.544 [2024-12-10 05:57:24.286189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.544 [2024-12-10 05:57:24.295959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.295978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.310319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.310338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.320030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.320049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.334633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.334651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.343841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.343860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.358267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.358285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.367033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.367051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.381559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.381577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.390729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.390747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.397230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 
[2024-12-10 05:57:24.397248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.408195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.408213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.422292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.422310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.545 [2024-12-10 05:57:24.432050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.545 [2024-12-10 05:57:24.432069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.446829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.446847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.461184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.461210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.473447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.473466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.486293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.486312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.494918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.494936] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.509547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.509567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.519774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.519793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.534460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.534479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.543939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.543958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.558473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.558492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.568456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.568475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.575242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.575260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.589985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.590004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:36.804 [2024-12-10 05:57:24.600046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.600064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.614850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.614869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.629059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.629077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 16914.00 IOPS, 132.14 MiB/s [2024-12-10T04:57:24.700Z] [2024-12-10 05:57:24.641266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.641284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.654449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.654468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.663758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.663777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.678292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.678310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.804 [2024-12-10 05:57:24.687690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:36.804 [2024-12-10 05:57:24.687713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:37.063 [2024-12-10 05:57:24.702120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.063 [2024-12-10 05:57:24.702138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.063 [2024-12-10 05:57:24.712004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.712023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.726027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.726047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.735754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.735772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.750580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.750601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.758248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.758267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.768599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.768618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.775099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.775118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.789932] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.789952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.800945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.800964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.814182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.814201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.824068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.824088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.838945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.838966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.853805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.853824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.864463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.864482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.871152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.871179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.885768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.885788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.896881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.896899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.910341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.910368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.917343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.917362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.928703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.928722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.935074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.935092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.064 [2024-12-10 05:57:24.943364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.064 [2024-12-10 05:57:24.943384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:24.957937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:24.957956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:24.967882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 
[2024-12-10 05:57:24.967902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:24.982094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:24.982113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:24.991003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:24.991022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:24.997579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:24.997598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.008459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.008479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.014996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.015014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.022904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.022923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.037553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.037572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.048610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.048630] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.055447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.055466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.069691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.069711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.080882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.080901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.094561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.094580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.101570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.101588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.112716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.112736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.119382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.119401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.133439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.133458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:37.324 [2024-12-10 05:57:25.143299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.143318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.157968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.157986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.166764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.166783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.173247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.173265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.184101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.324 [2024-12-10 05:57:25.184119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.324 [2024-12-10 05:57:25.198958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.325 [2024-12-10 05:57:25.198977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.325 [2024-12-10 05:57:25.213533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.325 [2024-12-10 05:57:25.213552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.223548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.223566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.238280] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.238298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.249155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.249179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.261764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.261783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.274128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.274147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.284992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.285011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.298393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.298411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.308216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.308235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.322030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.322049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.332908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.332926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.346256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.346275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.353976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.353993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.363132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.363150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.378117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.378136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.387854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.387872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.402282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.402300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.411547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.411565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.425970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 
[2024-12-10 05:57:25.425988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.435992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.436011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.450525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.450544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.584 [2024-12-10 05:57:25.465291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.584 [2024-12-10 05:57:25.465309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.843 [2024-12-10 05:57:25.475941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.843 [2024-12-10 05:57:25.475959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.843 [2024-12-10 05:57:25.490348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.843 [2024-12-10 05:57:25.490367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.843 [2024-12-10 05:57:25.498954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.843 [2024-12-10 05:57:25.498972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.843 [2024-12-10 05:57:25.505871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.843 [2024-12-10 05:57:25.505888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.843 [2024-12-10 05:57:25.515517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:37.843 [2024-12-10 05:57:25.515535] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:37.843 [2024-12-10 05:57:25.530081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:37.843 [2024-12-10 05:57:25.530100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:37.843 16888.50 IOPS, 131.94 MiB/s [2024-12-10T04:57:25.739Z]
00:32:38.882 16914.67 IOPS, 132.15 MiB/s [2024-12-10T04:57:26.778Z]
00:32:39.401 [2024-12-10 05:57:27.290069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:39.401 [2024-12-10 05:57:27.290089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:39.660 [2024-12-10 05:57:27.300451]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.300470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.306928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.306947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.315026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.315043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.329263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.329281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.338901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.338919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.345568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.345586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.356402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.356420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.370265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.370284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.380438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.380456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.394433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.394453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.401704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.401722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.411115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.411134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.417950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.417969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.428683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.428701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.435625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.435643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.447606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.447628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.462381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 
[2024-12-10 05:57:27.462400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.471148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.471175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.485706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.485725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.494840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.660 [2024-12-10 05:57:27.494859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.660 [2024-12-10 05:57:27.501352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.661 [2024-12-10 05:57:27.501370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.661 [2024-12-10 05:57:27.511961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.661 [2024-12-10 05:57:27.511980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.661 [2024-12-10 05:57:27.526557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.661 [2024-12-10 05:57:27.526576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.661 [2024-12-10 05:57:27.541037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.661 [2024-12-10 05:57:27.541055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.919 [2024-12-10 05:57:27.552306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.552325] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.566649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.566669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.574200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.574220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.584187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.584208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.598828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.598849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.613646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.613665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.624619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.624639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.631378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.631397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.643594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.643613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:39.920 16935.75 IOPS, 132.31 MiB/s [2024-12-10T04:57:27.816Z] [2024-12-10 05:57:27.658149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.658176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.667772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.667791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.682480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.682499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.691937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.691956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.706679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.706698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.713888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.713907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.723366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.723386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.738074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.738092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:39.920 [2024-12-10 05:57:27.748762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.748780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.762199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.762218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.771319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.771340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.785745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.785764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.795767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.795786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.920 [2024-12-10 05:57:27.810459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:39.920 [2024-12-10 05:57:27.810478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.818052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.818070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.827266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.827284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.841742] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.841761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.851404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.851423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.865934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.865953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.876201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.876220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.890276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.890294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.897732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.897751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.908093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.908112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.922245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.922266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.932310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.932328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.946151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.946176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.955930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.955948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.970163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.970186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.979024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.979042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.985777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.985795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:27.996523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:27.996541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:28.010679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:28.010697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:28.025120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 
[2024-12-10 05:57:28.025138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.179 [2024-12-10 05:57:28.037350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.179 [2024-12-10 05:57:28.037368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.180 [2024-12-10 05:57:28.050351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.180 [2024-12-10 05:57:28.050369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.180 [2024-12-10 05:57:28.058864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.180 [2024-12-10 05:57:28.058883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.073758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.073777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.083150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.083176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.089966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.089984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.100353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.100372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.114387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.114405] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.129016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.129034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.139805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.139823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.154326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.154355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.164225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.164243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.178773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.178791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.192774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.192792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.200100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.200118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.211537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.211556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:40.439 [2024-12-10 05:57:28.226283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.226302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.236511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.236529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.242802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.242821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.250954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.250972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.264975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.264993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.275483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.275501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.290149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.290172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.300152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.300176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.313901] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.313925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.439 [2024-12-10 05:57:28.323736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.439 [2024-12-10 05:57:28.323754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.338452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.338470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.348411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.348429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.362390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.362409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.371746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.371764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.386657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.386675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.394234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.394251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.403482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.403500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.417980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.417998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.428283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.428302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.442144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.442163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.451588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.451607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.466100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.466118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.475230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.475248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.489777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.489796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.501137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 
[2024-12-10 05:57:28.501155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.514248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.514267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.524809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.524826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.538265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.538289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.547626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.547645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.562097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.562116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.571000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.571019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.577458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.577476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.699 [2024-12-10 05:57:28.588234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.699 [2024-12-10 05:57:28.588252] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.958 [2024-12-10 05:57:28.602737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.958 [2024-12-10 05:57:28.602755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.958 [2024-12-10 05:57:28.617088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.958 [2024-12-10 05:57:28.617106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.958 [2024-12-10 05:57:28.626989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.958 [2024-12-10 05:57:28.627008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.958 [2024-12-10 05:57:28.633752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.958 [2024-12-10 05:57:28.633769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.958 [2024-12-10 05:57:28.644282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.958 [2024-12-10 05:57:28.644301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.958 16941.00 IOPS, 132.35 MiB/s [2024-12-10T04:57:28.854Z] [2024-12-10 05:57:28.656234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.958 [2024-12-10 05:57:28.656253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.958 00:32:40.958 Latency(us) 00:32:40.958 [2024-12-10T04:57:28.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.958 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:32:40.958 Nvme1n1 : 5.01 16944.03 132.38 0.00 0.00 7546.83 1903.66 13294.45 00:32:40.958 [2024-12-10T04:57:28.854Z] 
=================================================================================================================== 00:32:40.958 [2024-12-10T04:57:28.854Z] Total : 16944.03 132.38 0.00 0.00 7546.83 1903.66 13294.45 00:32:40.958 [2024-12-10 05:57:28.660552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.958 [2024-12-10 05:57:28.660564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.958 [2024-12-10 05:57:28.668555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.958 [2024-12-10 05:57:28.668570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.676552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.676562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.684561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.684580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.692559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.692583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.700554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.700569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.708555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.708569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.716556] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.716569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.732556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.732575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.740556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.740573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.748553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.748567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.756553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.756568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.764552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.764566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.772550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.772560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.780550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.780562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.788553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.788566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.796552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.796562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.804550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.804560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 [2024-12-10 05:57:28.812549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:40.959 [2024-12-10 05:57:28.812559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:40.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1408661) - No such process 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1408661 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.959 
05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:40.959 delay0 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.959 05:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:41.218 [2024-12-10 05:57:28.953420] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:49.340 Initializing NVMe Controllers 00:32:49.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:49.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:49.340 Initialization complete. Launching workers. 
00:32:49.340 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 264, failed: 19451 00:32:49.340 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19617, failed to submit 98 00:32:49.340 success 19546, unsuccessful 71, failed 0 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:49.340 rmmod nvme_tcp 00:32:49.340 rmmod nvme_fabrics 00:32:49.340 rmmod nvme_keyring 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1407072 ']' 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1407072 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 1407072 ']' 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1407072 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1407072 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1407072' 00:32:49.340 killing process with pid 1407072 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1407072 00:32:49.340 05:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1407072 00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.340 05:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.276 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.276 00:32:50.276 real 0m31.869s 00:32:50.276 user 0m40.880s 00:32:50.276 sys 0m12.489s 00:32:50.276 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.276 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:50.276 ************************************ 00:32:50.276 END TEST nvmf_zcopy 00:32:50.276 ************************************ 00:32:50.276 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:50.276 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:50.276 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.276 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:50.535 
************************************ 00:32:50.535 START TEST nvmf_nmic 00:32:50.535 ************************************ 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:50.535 * Looking for test storage... 00:32:50.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.535 05:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.535 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.536 05:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:50.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.536 --rc genhtml_branch_coverage=1 00:32:50.536 --rc genhtml_function_coverage=1 00:32:50.536 --rc genhtml_legend=1 00:32:50.536 --rc geninfo_all_blocks=1 00:32:50.536 --rc geninfo_unexecuted_blocks=1 00:32:50.536 00:32:50.536 ' 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:50.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.536 --rc genhtml_branch_coverage=1 00:32:50.536 --rc genhtml_function_coverage=1 00:32:50.536 --rc genhtml_legend=1 00:32:50.536 --rc geninfo_all_blocks=1 00:32:50.536 --rc geninfo_unexecuted_blocks=1 00:32:50.536 00:32:50.536 ' 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:50.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.536 --rc genhtml_branch_coverage=1 00:32:50.536 --rc genhtml_function_coverage=1 00:32:50.536 --rc genhtml_legend=1 00:32:50.536 --rc geninfo_all_blocks=1 00:32:50.536 --rc geninfo_unexecuted_blocks=1 00:32:50.536 00:32:50.536 ' 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:50.536 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.536 --rc genhtml_branch_coverage=1 00:32:50.536 --rc genhtml_function_coverage=1 00:32:50.536 --rc genhtml_legend=1 00:32:50.536 --rc geninfo_all_blocks=1 00:32:50.536 --rc geninfo_unexecuted_blocks=1 00:32:50.536 00:32:50.536 ' 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:50.536 05:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.536 05:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.536 05:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.104 05:57:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:57.104 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:57.105 05:57:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:57.105 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:57.105 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.105 05:57:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:57.105 Found net devices under 0000:af:00.0: cvl_0_0 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.105 05:57:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:57.105 Found net devices under 0000:af:00.1: cvl_0_1 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.105 05:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:57.105 05:57:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:57.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:32:57.105 00:32:57.105 --- 10.0.0.2 ping statistics --- 00:32:57.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.105 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:57.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:32:57.105 00:32:57.105 --- 10.0.0.1 ping statistics --- 00:32:57.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.105 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1414136 
00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1414136 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1414136 ']' 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.105 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.105 [2024-12-10 05:57:44.312617] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:57.105 [2024-12-10 05:57:44.313543] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:32:57.106 [2024-12-10 05:57:44.313577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.106 [2024-12-10 05:57:44.393132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:57.106 [2024-12-10 05:57:44.434378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.106 [2024-12-10 05:57:44.434419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.106 [2024-12-10 05:57:44.434426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.106 [2024-12-10 05:57:44.434432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.106 [2024-12-10 05:57:44.434436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:57.106 [2024-12-10 05:57:44.435764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.106 [2024-12-10 05:57:44.435784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:57.106 [2024-12-10 05:57:44.435874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.106 [2024-12-10 05:57:44.435875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:57.106 [2024-12-10 05:57:44.502876] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:57.106 [2024-12-10 05:57:44.503006] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:57.106 [2024-12-10 05:57:44.503670] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:57.106 [2024-12-10 05:57:44.503978] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:57.106 [2024-12-10 05:57:44.504047] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.106 [2024-12-10 05:57:44.576701] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.106 Malloc0 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.106 [2024-12-10 05:57:44.652900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.106 05:57:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:57.106 test case1: single bdev can't be used in multiple subsystems 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.106 [2024-12-10 05:57:44.684396] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:57.106 [2024-12-10 05:57:44.684418] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:57.106 [2024-12-10 05:57:44.684425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:57.106 request: 00:32:57.106 { 00:32:57.106 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:57.106 "namespace": { 00:32:57.106 "bdev_name": "Malloc0", 00:32:57.106 "no_auto_visible": false, 00:32:57.106 "hide_metadata": false 00:32:57.106 }, 00:32:57.106 "method": "nvmf_subsystem_add_ns", 00:32:57.106 "req_id": 1 00:32:57.106 } 00:32:57.106 Got JSON-RPC error response 00:32:57.106 response: 00:32:57.106 { 00:32:57.106 "code": -32602, 00:32:57.106 "message": "Invalid parameters" 00:32:57.106 } 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:57.106 Adding namespace failed - expected result. 
00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:57.106 test case2: host connect to nvmf target in multiple paths 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:57.106 [2024-12-10 05:57:44.696484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:57.106 05:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:57.368 05:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:57.368 05:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:57.368 05:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:57.368 05:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:57.368 05:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:59.899 05:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:59.899 05:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:59.899 05:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:59.899 05:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:59.899 05:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:59.899 05:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:59.899 05:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:59.899 [global] 00:32:59.899 thread=1 00:32:59.899 invalidate=1 00:32:59.899 rw=write 00:32:59.899 time_based=1 00:32:59.899 runtime=1 00:32:59.899 ioengine=libaio 00:32:59.899 direct=1 00:32:59.899 bs=4096 00:32:59.899 iodepth=1 00:32:59.899 norandommap=0 00:32:59.899 numjobs=1 00:32:59.899 00:32:59.899 verify_dump=1 00:32:59.899 verify_backlog=512 00:32:59.899 verify_state_save=0 00:32:59.899 do_verify=1 00:32:59.899 verify=crc32c-intel 00:32:59.899 [job0] 00:32:59.899 filename=/dev/nvme0n1 00:32:59.899 Could not set queue depth (nvme0n1) 00:32:59.899 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.899 fio-3.35 00:32:59.899 Starting 1 thread 00:33:00.834 00:33:00.834 job0: (groupid=0, jobs=1): err= 0: pid=1414751: Tue Dec 10 
05:57:48 2024 00:33:00.834 read: IOPS=2336, BW=9347KiB/s (9571kB/s)(9356KiB/1001msec) 00:33:00.834 slat (nsec): min=6885, max=22879, avg=7783.94, stdev=1168.07 00:33:00.834 clat (usec): min=192, max=423, avg=237.22, stdev=27.90 00:33:00.834 lat (usec): min=199, max=433, avg=245.00, stdev=27.93 00:33:00.834 clat percentiles (usec): 00:33:00.834 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 212], 00:33:00.834 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 245], 00:33:00.834 | 70.00th=[ 251], 80.00th=[ 273], 90.00th=[ 277], 95.00th=[ 281], 00:33:00.834 | 99.00th=[ 293], 99.50th=[ 318], 99.90th=[ 326], 99.95th=[ 412], 00:33:00.834 | 99.99th=[ 424] 00:33:00.834 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:33:00.834 slat (nsec): min=9578, max=43847, avg=10836.88, stdev=1746.21 00:33:00.834 clat (usec): min=120, max=292, avg=150.37, stdev=30.41 00:33:00.834 lat (usec): min=134, max=304, avg=161.20, stdev=30.47 00:33:00.834 clat percentiles (usec): 00:33:00.834 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:33:00.834 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:33:00.834 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 184], 95.00th=[ 247], 00:33:00.834 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 269], 99.95th=[ 273], 00:33:00.834 | 99.99th=[ 293] 00:33:00.834 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:33:00.834 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:33:00.834 lat (usec) : 250=84.30%, 500=15.70% 00:33:00.834 cpu : usr=3.80%, sys=7.60%, ctx=4899, majf=0, minf=1 00:33:00.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.834 issued rwts: total=2339,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.834 
latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.834 00:33:00.835 Run status group 0 (all jobs): 00:33:00.835 READ: bw=9347KiB/s (9571kB/s), 9347KiB/s-9347KiB/s (9571kB/s-9571kB/s), io=9356KiB (9581kB), run=1001-1001msec 00:33:00.835 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:33:00.835 00:33:00.835 Disk stats (read/write): 00:33:00.835 nvme0n1: ios=2098/2351, merge=0/0, ticks=486/317, in_queue=803, util=91.48% 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:01.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 
-- # nvmfcleanup 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:01.094 rmmod nvme_tcp 00:33:01.094 rmmod nvme_fabrics 00:33:01.094 rmmod nvme_keyring 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1414136 ']' 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1414136 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1414136 ']' 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1414136 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1414136 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1414136' 00:33:01.094 killing process with pid 1414136 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1414136 00:33:01.094 05:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1414136 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:33:01.355 05:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:03.969 00:33:03.969 real 0m13.063s 00:33:03.969 user 0m24.255s 00:33:03.969 sys 0m6.127s 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:03.969 ************************************ 00:33:03.969 END TEST nvmf_nmic 00:33:03.969 ************************************ 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:03.969 ************************************ 00:33:03.969 START TEST nvmf_fio_target 00:33:03.969 ************************************ 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:03.969 * Looking for test storage... 
00:33:03.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:03.969 
05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.969 --rc genhtml_branch_coverage=1 00:33:03.969 --rc genhtml_function_coverage=1 00:33:03.969 --rc genhtml_legend=1 00:33:03.969 --rc geninfo_all_blocks=1 00:33:03.969 --rc geninfo_unexecuted_blocks=1 00:33:03.969 00:33:03.969 ' 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.969 --rc genhtml_branch_coverage=1 00:33:03.969 --rc genhtml_function_coverage=1 00:33:03.969 --rc genhtml_legend=1 00:33:03.969 --rc geninfo_all_blocks=1 00:33:03.969 --rc geninfo_unexecuted_blocks=1 00:33:03.969 00:33:03.969 ' 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.969 --rc genhtml_branch_coverage=1 00:33:03.969 --rc genhtml_function_coverage=1 00:33:03.969 --rc genhtml_legend=1 00:33:03.969 --rc geninfo_all_blocks=1 00:33:03.969 --rc geninfo_unexecuted_blocks=1 00:33:03.969 00:33:03.969 ' 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.969 --rc genhtml_branch_coverage=1 00:33:03.969 --rc genhtml_function_coverage=1 00:33:03.969 --rc genhtml_legend=1 00:33:03.969 --rc geninfo_all_blocks=1 
00:33:03.969 --rc geninfo_unexecuted_blocks=1 00:33:03.969 00:33:03.969 ' 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:03.969 
05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.969 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.969 05:57:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:03.970 
05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:03.970 05:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:03.970 05:57:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:09.245 05:57:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:09.245 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:09.245 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.245 
05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.245 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:09.246 Found net 
devices under 0000:af:00.0: cvl_0_0 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:09.246 Found net devices under 0000:af:00.1: cvl_0_1 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:09.246 05:57:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:33:09.246 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:09.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:09.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:33:09.505 00:33:09.505 --- 10.0.0.2 ping statistics --- 00:33:09.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.505 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:09.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:09.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:33:09.505 00:33:09.505 --- 10.0.0.1 ping statistics --- 00:33:09.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.505 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:09.505 05:57:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1418429 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1418429 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1418429 ']' 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.505 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:09.764 [2024-12-10 05:57:57.435330] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:09.764 [2024-12-10 05:57:57.436208] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:33:09.764 [2024-12-10 05:57:57.436243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.764 [2024-12-10 05:57:57.514639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:09.764 [2024-12-10 05:57:57.553217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.764 [2024-12-10 05:57:57.553257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.764 [2024-12-10 05:57:57.553263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.764 [2024-12-10 05:57:57.553269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.764 [2024-12-10 05:57:57.553274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:09.764 [2024-12-10 05:57:57.554574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.764 [2024-12-10 05:57:57.554681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:09.764 [2024-12-10 05:57:57.554764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.764 [2024-12-10 05:57:57.554766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:09.764 [2024-12-10 05:57:57.624004] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:09.764 [2024-12-10 05:57:57.625188] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:09.764 [2024-12-10 05:57:57.625405] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:09.764 [2024-12-10 05:57:57.625639] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:09.764 [2024-12-10 05:57:57.625691] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:10.023 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.023 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:33:10.023 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:10.023 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.023 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:10.023 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.023 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:10.023 [2024-12-10 05:57:57.867618] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.282 05:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:10.282 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:10.282 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:33:10.541 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:10.541 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:10.800 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:10.800 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:11.058 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:11.058 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:11.317 05:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:11.317 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:11.317 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:11.576 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:11.576 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:11.835 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:33:11.835 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:12.094 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:12.094 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:12.094 05:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:12.352 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:12.352 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:12.611 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:12.869 [2024-12-10 05:58:00.559481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.869 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:13.128 05:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:13.128 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:13.387 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:13.387 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:33:13.387 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:13.387 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:33:13.387 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:33:13.387 05:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:33:15.921 05:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:15.921 05:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:15.921 05:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:15.921 05:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:33:15.921 05:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:15.921 05:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:33:15.921 05:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:15.921 [global] 00:33:15.921 thread=1 00:33:15.921 invalidate=1 00:33:15.921 rw=write 00:33:15.921 time_based=1 00:33:15.921 runtime=1 00:33:15.921 ioengine=libaio 00:33:15.921 direct=1 00:33:15.921 bs=4096 00:33:15.921 iodepth=1 00:33:15.921 norandommap=0 00:33:15.921 numjobs=1 00:33:15.921 00:33:15.921 verify_dump=1 00:33:15.921 verify_backlog=512 00:33:15.921 verify_state_save=0 00:33:15.921 do_verify=1 00:33:15.921 verify=crc32c-intel 00:33:15.921 [job0] 00:33:15.921 filename=/dev/nvme0n1 00:33:15.921 [job1] 00:33:15.921 filename=/dev/nvme0n2 00:33:15.921 [job2] 00:33:15.921 filename=/dev/nvme0n3 00:33:15.921 [job3] 00:33:15.921 filename=/dev/nvme0n4 00:33:15.921 Could not set queue depth (nvme0n1) 00:33:15.921 Could not set queue depth (nvme0n2) 00:33:15.921 Could not set queue depth (nvme0n3) 00:33:15.921 Could not set queue depth (nvme0n4) 00:33:15.921 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:15.921 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:15.921 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:15.921 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:15.921 fio-3.35 00:33:15.921 Starting 4 threads 00:33:17.302 00:33:17.302 job0: (groupid=0, jobs=1): err= 0: pid=1419688: Tue Dec 10 05:58:04 2024 00:33:17.302 read: IOPS=1516, BW=6065KiB/s (6210kB/s)(6180KiB/1019msec) 00:33:17.302 slat (nsec): min=6189, max=25987, avg=7093.51, stdev=1126.62 00:33:17.302 clat (usec): min=180, max=41147, avg=422.89, stdev=2921.35 00:33:17.302 lat (usec): min=187, 
max=41159, avg=429.98, stdev=2921.97 00:33:17.302 clat percentiles (usec): 00:33:17.302 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:33:17.302 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 212], 00:33:17.302 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 225], 95.00th=[ 235], 00:33:17.302 | 99.00th=[ 277], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:33:17.302 | 99.99th=[41157] 00:33:17.302 write: IOPS=2009, BW=8039KiB/s (8232kB/s)(8192KiB/1019msec); 0 zone resets 00:33:17.302 slat (nsec): min=9368, max=42657, avg=10691.12, stdev=1553.12 00:33:17.302 clat (usec): min=125, max=307, avg=158.26, stdev=17.20 00:33:17.302 lat (usec): min=135, max=342, avg=168.95, stdev=17.62 00:33:17.302 clat percentiles (usec): 00:33:17.302 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:33:17.302 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:33:17.302 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 178], 95.00th=[ 198], 00:33:17.302 | 99.00th=[ 217], 99.50th=[ 221], 99.90th=[ 269], 99.95th=[ 302], 00:33:17.302 | 99.99th=[ 306] 00:33:17.302 bw ( KiB/s): min= 4096, max=12288, per=30.25%, avg=8192.00, stdev=5792.62, samples=2 00:33:17.302 iops : min= 1024, max= 3072, avg=2048.00, stdev=1448.15, samples=2 00:33:17.302 lat (usec) : 250=98.91%, 500=0.86% 00:33:17.302 lat (msec) : 50=0.22% 00:33:17.302 cpu : usr=1.87%, sys=3.34%, ctx=3594, majf=0, minf=1 00:33:17.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.302 issued rwts: total=1545,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:17.302 job1: (groupid=0, jobs=1): err= 0: pid=1419702: Tue Dec 10 05:58:04 2024 00:33:17.302 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 
00:33:17.302 slat (nsec): min=6466, max=22729, avg=7764.73, stdev=1306.50 00:33:17.302 clat (usec): min=200, max=428, avg=258.84, stdev=33.26 00:33:17.302 lat (usec): min=207, max=437, avg=266.61, stdev=33.94 00:33:17.302 clat percentiles (usec): 00:33:17.302 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:33:17.302 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:33:17.302 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 338], 00:33:17.302 | 99.00th=[ 383], 99.50th=[ 404], 99.90th=[ 429], 99.95th=[ 429], 00:33:17.302 | 99.99th=[ 429] 00:33:17.302 write: IOPS=2269, BW=9079KiB/s (9297kB/s)(9088KiB/1001msec); 0 zone resets 00:33:17.302 slat (nsec): min=9487, max=48131, avg=11224.47, stdev=1874.65 00:33:17.302 clat (usec): min=122, max=421, avg=184.12, stdev=29.97 00:33:17.302 lat (usec): min=132, max=469, avg=195.35, stdev=30.73 00:33:17.302 clat percentiles (usec): 00:33:17.302 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 153], 00:33:17.302 | 30.00th=[ 163], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 194], 00:33:17.302 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 229], 00:33:17.302 | 99.00th=[ 258], 99.50th=[ 273], 99.90th=[ 285], 99.95th=[ 293], 00:33:17.302 | 99.99th=[ 420] 00:33:17.302 bw ( KiB/s): min= 8192, max= 8192, per=30.25%, avg=8192.00, stdev= 0.00, samples=1 00:33:17.302 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:17.302 lat (usec) : 250=76.34%, 500=23.66% 00:33:17.302 cpu : usr=2.60%, sys=4.00%, ctx=4322, majf=0, minf=1 00:33:17.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.302 issued rwts: total=2048,2272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:17.302 job2: (groupid=0, 
jobs=1): err= 0: pid=1419718: Tue Dec 10 05:58:04 2024 00:33:17.302 read: IOPS=300, BW=1202KiB/s (1231kB/s)(1220KiB/1015msec) 00:33:17.302 slat (nsec): min=6824, max=26720, avg=8727.62, stdev=4038.19 00:33:17.302 clat (usec): min=232, max=41521, avg=2929.85, stdev=10026.73 00:33:17.302 lat (usec): min=239, max=41530, avg=2938.57, stdev=10029.94 00:33:17.302 clat percentiles (usec): 00:33:17.302 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 253], 00:33:17.302 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:33:17.302 | 70.00th=[ 281], 80.00th=[ 318], 90.00th=[ 396], 95.00th=[41157], 00:33:17.302 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:33:17.302 | 99.99th=[41681] 00:33:17.302 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:33:17.302 slat (nsec): min=12586, max=46496, avg=13973.15, stdev=2160.57 00:33:17.302 clat (usec): min=179, max=386, avg=212.86, stdev=17.56 00:33:17.302 lat (usec): min=192, max=432, avg=226.84, stdev=18.31 00:33:17.302 clat percentiles (usec): 00:33:17.302 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:33:17.302 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:33:17.302 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 239], 00:33:17.302 | 99.00th=[ 260], 99.50th=[ 281], 99.90th=[ 388], 99.95th=[ 388], 00:33:17.302 | 99.99th=[ 388] 00:33:17.302 bw ( KiB/s): min= 4096, max= 4096, per=15.12%, avg=4096.00, stdev= 0.00, samples=1 00:33:17.302 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:17.302 lat (usec) : 250=65.85%, 500=31.58%, 750=0.12% 00:33:17.302 lat (msec) : 50=2.45% 00:33:17.302 cpu : usr=0.39%, sys=1.08%, ctx=818, majf=0, minf=2 00:33:17.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:17.302 issued rwts: total=305,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:17.302 job3: (groupid=0, jobs=1): err= 0: pid=1419723: Tue Dec 10 05:58:04 2024 00:33:17.302 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:33:17.302 slat (nsec): min=3187, max=40016, avg=7836.55, stdev=3603.66 00:33:17.302 clat (usec): min=220, max=536, avg=280.41, stdev=38.53 00:33:17.302 lat (usec): min=224, max=540, avg=288.25, stdev=39.46 00:33:17.302 clat percentiles (usec): 00:33:17.302 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 00:33:17.302 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 289], 00:33:17.302 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 355], 00:33:17.302 | 99.00th=[ 420], 99.50th=[ 465], 99.90th=[ 498], 99.95th=[ 506], 00:33:17.302 | 99.99th=[ 537] 00:33:17.302 write: IOPS=2064, BW=8260KiB/s (8458kB/s)(8268KiB/1001msec); 0 zone resets 00:33:17.302 slat (nsec): min=4006, max=43121, avg=9487.45, stdev=3059.87 00:33:17.302 clat (usec): min=144, max=3694, avg=184.23, stdev=81.26 00:33:17.302 lat (usec): min=149, max=3704, avg=193.71, stdev=81.53 00:33:17.302 clat percentiles (usec): 00:33:17.302 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:33:17.302 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:33:17.302 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 215], 00:33:17.302 | 99.00th=[ 253], 99.50th=[ 293], 99.90th=[ 537], 99.95th=[ 693], 00:33:17.302 | 99.99th=[ 3687] 00:33:17.302 bw ( KiB/s): min= 8192, max= 8192, per=30.25%, avg=8192.00, stdev= 0.00, samples=1 00:33:17.302 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:17.302 lat (usec) : 250=59.37%, 500=40.51%, 750=0.10% 00:33:17.302 lat (msec) : 4=0.02% 00:33:17.302 cpu : usr=1.60%, sys=4.00%, ctx=4115, majf=0, minf=2 00:33:17.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:33:17.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.302 issued rwts: total=2048,2067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:17.302 00:33:17.302 Run status group 0 (all jobs): 00:33:17.302 READ: bw=22.8MiB/s (23.9MB/s), 1202KiB/s-8184KiB/s (1231kB/s-8380kB/s), io=23.2MiB (24.4MB), run=1001-1019msec 00:33:17.302 WRITE: bw=26.4MiB/s (27.7MB/s), 2018KiB/s-9079KiB/s (2066kB/s-9297kB/s), io=26.9MiB (28.3MB), run=1001-1019msec 00:33:17.302 00:33:17.302 Disk stats (read/write): 00:33:17.302 nvme0n1: ios=1591/2048, merge=0/0, ticks=1409/312, in_queue=1721, util=98.20% 00:33:17.302 nvme0n2: ios=1642/2048, merge=0/0, ticks=1396/369, in_queue=1765, util=98.48% 00:33:17.302 nvme0n3: ios=325/512, merge=0/0, ticks=1718/103, in_queue=1821, util=98.54% 00:33:17.302 nvme0n4: ios=1560/2021, merge=0/0, ticks=574/353, in_queue=927, util=90.98% 00:33:17.302 05:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:17.302 [global] 00:33:17.302 thread=1 00:33:17.302 invalidate=1 00:33:17.302 rw=randwrite 00:33:17.302 time_based=1 00:33:17.302 runtime=1 00:33:17.302 ioengine=libaio 00:33:17.302 direct=1 00:33:17.302 bs=4096 00:33:17.302 iodepth=1 00:33:17.302 norandommap=0 00:33:17.302 numjobs=1 00:33:17.302 00:33:17.302 verify_dump=1 00:33:17.302 verify_backlog=512 00:33:17.302 verify_state_save=0 00:33:17.302 do_verify=1 00:33:17.302 verify=crc32c-intel 00:33:17.302 [job0] 00:33:17.302 filename=/dev/nvme0n1 00:33:17.302 [job1] 00:33:17.302 filename=/dev/nvme0n2 00:33:17.302 [job2] 00:33:17.302 filename=/dev/nvme0n3 00:33:17.302 [job3] 00:33:17.302 filename=/dev/nvme0n4 00:33:17.302 Could not set queue depth (nvme0n1) 00:33:17.302 
Could not set queue depth (nvme0n2) 00:33:17.302 Could not set queue depth (nvme0n3) 00:33:17.302 Could not set queue depth (nvme0n4) 00:33:17.302 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:17.302 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:17.302 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:17.302 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:17.302 fio-3.35 00:33:17.302 Starting 4 threads 00:33:18.679 00:33:18.679 job0: (groupid=0, jobs=1): err= 0: pid=1420104: Tue Dec 10 05:58:06 2024 00:33:18.679 read: IOPS=2096, BW=8388KiB/s (8589kB/s)(8396KiB/1001msec) 00:33:18.679 slat (nsec): min=6784, max=21045, avg=7709.53, stdev=1047.16 00:33:18.679 clat (usec): min=179, max=471, avg=238.52, stdev=42.83 00:33:18.679 lat (usec): min=186, max=478, avg=246.23, stdev=42.86 00:33:18.679 clat percentiles (usec): 00:33:18.679 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 206], 00:33:18.679 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 231], 00:33:18.679 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 318], 00:33:18.679 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 465], 99.95th=[ 469], 00:33:18.679 | 99.99th=[ 474] 00:33:18.679 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:33:18.679 slat (nsec): min=8832, max=42740, avg=10659.65, stdev=1638.80 00:33:18.679 clat (usec): min=119, max=303, avg=172.83, stdev=28.87 00:33:18.679 lat (usec): min=142, max=341, avg=183.49, stdev=29.17 00:33:18.679 clat percentiles (usec): 00:33:18.679 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:33:18.679 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169], 00:33:18.679 | 70.00th=[ 176], 80.00th=[ 200], 90.00th=[ 223], 95.00th=[ 
235], 00:33:18.679 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 281], 99.95th=[ 285], 00:33:18.679 | 99.99th=[ 306] 00:33:18.679 bw ( KiB/s): min= 8456, max= 8456, per=30.33%, avg=8456.00, stdev= 0.00, samples=1 00:33:18.679 iops : min= 2114, max= 2114, avg=2114.00, stdev= 0.00, samples=1 00:33:18.679 lat (usec) : 250=84.63%, 500=15.37% 00:33:18.679 cpu : usr=4.20%, sys=6.70%, ctx=4659, majf=0, minf=1 00:33:18.679 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:18.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.680 issued rwts: total=2099,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:18.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:18.680 job1: (groupid=0, jobs=1): err= 0: pid=1420105: Tue Dec 10 05:58:06 2024 00:33:18.680 read: IOPS=22, BW=91.9KiB/s (94.1kB/s)(92.0KiB/1001msec) 00:33:18.680 slat (nsec): min=10957, max=23685, avg=20655.30, stdev=4479.65 00:33:18.680 clat (usec): min=267, max=42070, avg=39237.01, stdev=8498.63 00:33:18.680 lat (usec): min=289, max=42092, avg=39257.66, stdev=8498.28 00:33:18.680 clat percentiles (usec): 00:33:18.680 | 1.00th=[ 269], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:18.680 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:18.680 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:18.680 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:18.680 | 99.99th=[42206] 00:33:18.680 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:33:18.680 slat (nsec): min=10372, max=38579, avg=11567.58, stdev=1611.34 00:33:18.680 clat (usec): min=156, max=221, avg=176.84, stdev=10.40 00:33:18.680 lat (usec): min=168, max=246, avg=188.41, stdev=10.68 00:33:18.680 clat percentiles (usec): 00:33:18.680 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 
00:33:18.680 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:33:18.680 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 198], 00:33:18.680 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 223], 99.95th=[ 223], 00:33:18.680 | 99.99th=[ 223] 00:33:18.680 bw ( KiB/s): min= 4096, max= 4096, per=14.69%, avg=4096.00, stdev= 0.00, samples=1 00:33:18.680 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:18.680 lat (usec) : 250=95.70%, 500=0.19% 00:33:18.680 lat (msec) : 50=4.11% 00:33:18.680 cpu : usr=0.30%, sys=0.60%, ctx=535, majf=0, minf=1 00:33:18.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:18.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.680 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:18.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:18.680 job2: (groupid=0, jobs=1): err= 0: pid=1420106: Tue Dec 10 05:58:06 2024 00:33:18.680 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:33:18.680 slat (nsec): min=6496, max=39061, avg=8100.05, stdev=1585.14 00:33:18.680 clat (usec): min=169, max=519, avg=253.55, stdev=30.51 00:33:18.680 lat (usec): min=176, max=532, avg=261.65, stdev=30.63 00:33:18.680 clat percentiles (usec): 00:33:18.680 | 1.00th=[ 202], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:33:18.680 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:33:18.680 | 70.00th=[ 255], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 314], 00:33:18.680 | 99.00th=[ 347], 99.50th=[ 375], 99.90th=[ 441], 99.95th=[ 494], 00:33:18.680 | 99.99th=[ 519] 00:33:18.680 write: IOPS=2428, BW=9714KiB/s (9947kB/s)(9724KiB/1001msec); 0 zone resets 00:33:18.680 slat (nsec): min=8772, max=36695, avg=10557.58, stdev=1480.10 00:33:18.680 clat (usec): min=128, max=348, avg=175.45, stdev=26.70 00:33:18.680 lat (usec): 
min=138, max=384, avg=186.00, stdev=26.70 00:33:18.680 clat percentiles (usec): 00:33:18.680 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 159], 00:33:18.680 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:33:18.680 | 70.00th=[ 176], 80.00th=[ 192], 90.00th=[ 223], 95.00th=[ 233], 00:33:18.680 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 273], 99.95th=[ 285], 00:33:18.680 | 99.99th=[ 347] 00:33:18.680 bw ( KiB/s): min= 8928, max= 8928, per=32.03%, avg=8928.00, stdev= 0.00, samples=1 00:33:18.680 iops : min= 2232, max= 2232, avg=2232.00, stdev= 0.00, samples=1 00:33:18.680 lat (usec) : 250=82.34%, 500=17.64%, 750=0.02% 00:33:18.680 cpu : usr=2.80%, sys=6.00%, ctx=4479, majf=0, minf=1 00:33:18.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:18.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.680 issued rwts: total=2048,2431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:18.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:18.680 job3: (groupid=0, jobs=1): err= 0: pid=1420107: Tue Dec 10 05:58:06 2024 00:33:18.680 read: IOPS=1022, BW=4091KiB/s (4189kB/s)(4132KiB/1010msec) 00:33:18.680 slat (nsec): min=7155, max=26055, avg=8350.42, stdev=1391.57 00:33:18.680 clat (usec): min=199, max=42031, avg=667.64, stdev=4005.60 00:33:18.680 lat (usec): min=207, max=42040, avg=675.99, stdev=4006.16 00:33:18.680 clat percentiles (usec): 00:33:18.680 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 239], 00:33:18.680 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 277], 00:33:18.680 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 338], 00:33:18.680 | 99.00th=[ 461], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:33:18.680 | 99.99th=[42206] 00:33:18.680 write: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec); 0 zone resets 00:33:18.680 slat 
(nsec): min=10371, max=38051, avg=12027.35, stdev=1925.18 00:33:18.680 clat (usec): min=130, max=323, avg=185.68, stdev=31.77 00:33:18.680 lat (usec): min=141, max=361, avg=197.70, stdev=32.32 00:33:18.680 clat percentiles (usec): 00:33:18.680 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 163], 00:33:18.680 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 182], 00:33:18.680 | 70.00th=[ 202], 80.00th=[ 217], 90.00th=[ 231], 95.00th=[ 241], 00:33:18.680 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 318], 99.95th=[ 322], 00:33:18.680 | 99.99th=[ 322] 00:33:18.680 bw ( KiB/s): min= 4096, max= 8192, per=22.04%, avg=6144.00, stdev=2896.31, samples=2 00:33:18.680 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:33:18.680 lat (usec) : 250=68.51%, 500=31.10% 00:33:18.680 lat (msec) : 50=0.39% 00:33:18.680 cpu : usr=1.78%, sys=4.26%, ctx=2573, majf=0, minf=1 00:33:18.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:18.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.680 issued rwts: total=1033,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:18.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:18.680 00:33:18.680 Run status group 0 (all jobs): 00:33:18.680 READ: bw=20.1MiB/s (21.1MB/s), 91.9KiB/s-8388KiB/s (94.1kB/s-8589kB/s), io=20.3MiB (21.3MB), run=1001-1010msec 00:33:18.680 WRITE: bw=27.2MiB/s (28.5MB/s), 2046KiB/s-9.99MiB/s (2095kB/s-10.5MB/s), io=27.5MiB (28.8MB), run=1001-1010msec 00:33:18.680 00:33:18.680 Disk stats (read/write): 00:33:18.680 nvme0n1: ios=1880/2048, merge=0/0, ticks=452/350, in_queue=802, util=87.07% 00:33:18.680 nvme0n2: ios=32/512, merge=0/0, ticks=751/87, in_queue=838, util=87.01% 00:33:18.680 nvme0n3: ios=1724/2048, merge=0/0, ticks=414/353, in_queue=767, util=89.07% 00:33:18.680 nvme0n4: ios=1053/1536, merge=0/0, ticks=1489/263, 
in_queue=1752, util=98.53% 00:33:18.680 05:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:18.680 [global] 00:33:18.680 thread=1 00:33:18.680 invalidate=1 00:33:18.680 rw=write 00:33:18.680 time_based=1 00:33:18.680 runtime=1 00:33:18.680 ioengine=libaio 00:33:18.680 direct=1 00:33:18.680 bs=4096 00:33:18.680 iodepth=128 00:33:18.680 norandommap=0 00:33:18.680 numjobs=1 00:33:18.680 00:33:18.680 verify_dump=1 00:33:18.680 verify_backlog=512 00:33:18.680 verify_state_save=0 00:33:18.680 do_verify=1 00:33:18.680 verify=crc32c-intel 00:33:18.680 [job0] 00:33:18.680 filename=/dev/nvme0n1 00:33:18.680 [job1] 00:33:18.680 filename=/dev/nvme0n2 00:33:18.680 [job2] 00:33:18.680 filename=/dev/nvme0n3 00:33:18.680 [job3] 00:33:18.680 filename=/dev/nvme0n4 00:33:18.680 Could not set queue depth (nvme0n1) 00:33:18.680 Could not set queue depth (nvme0n2) 00:33:18.680 Could not set queue depth (nvme0n3) 00:33:18.680 Could not set queue depth (nvme0n4) 00:33:18.939 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:18.939 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:18.939 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:18.939 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:18.939 fio-3.35 00:33:18.939 Starting 4 threads 00:33:20.317 00:33:20.317 job0: (groupid=0, jobs=1): err= 0: pid=1420466: Tue Dec 10 05:58:07 2024 00:33:20.317 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:33:20.317 slat (nsec): min=1682, max=4924.2k, avg=80173.41, stdev=489619.69 00:33:20.317 clat (usec): min=6321, max=17895, avg=10395.06, stdev=1508.79 00:33:20.317 lat (usec): min=6328, 
max=17903, avg=10475.24, stdev=1539.34 00:33:20.317 clat percentiles (usec): 00:33:20.317 | 1.00th=[ 7308], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 9241], 00:33:20.317 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10159], 60.00th=[10552], 00:33:20.317 | 70.00th=[10945], 80.00th=[11731], 90.00th=[12649], 95.00th=[13042], 00:33:20.317 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15008], 99.95th=[15401], 00:33:20.317 | 99.99th=[17957] 00:33:20.317 write: IOPS=6200, BW=24.2MiB/s (25.4MB/s)(24.3MiB/1005msec); 0 zone resets 00:33:20.317 slat (usec): min=2, max=4555, avg=74.94, stdev=431.60 00:33:20.317 clat (usec): min=4259, max=15049, avg=10157.18, stdev=1072.60 00:33:20.317 lat (usec): min=4913, max=15081, avg=10232.12, stdev=1141.46 00:33:20.317 clat percentiles (usec): 00:33:20.317 | 1.00th=[ 6194], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[ 9765], 00:33:20.317 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:33:20.317 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10683], 95.00th=[12125], 00:33:20.317 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14746], 99.95th=[14877], 00:33:20.317 | 99.99th=[15008] 00:33:20.317 bw ( KiB/s): min=24576, max=24576, per=34.17%, avg=24576.00, stdev= 0.00, samples=2 00:33:20.317 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:33:20.317 lat (msec) : 10=38.76%, 20=61.24% 00:33:20.317 cpu : usr=5.08%, sys=8.57%, ctx=524, majf=0, minf=1 00:33:20.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:20.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:20.317 issued rwts: total=6144,6231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:20.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:20.317 job1: (groupid=0, jobs=1): err= 0: pid=1420467: Tue Dec 10 05:58:07 2024 00:33:20.317 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 
00:33:20.317 slat (nsec): min=1164, max=9067.2k, avg=71439.60, stdev=605264.32 00:33:20.317 clat (usec): min=2687, max=49297, avg=10259.69, stdev=3160.46 00:33:20.317 lat (usec): min=2690, max=49303, avg=10331.13, stdev=3215.41 00:33:20.318 clat percentiles (usec): 00:33:20.318 | 1.00th=[ 3982], 5.00th=[ 6194], 10.00th=[ 7373], 20.00th=[ 8586], 00:33:20.318 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:33:20.318 | 70.00th=[10421], 80.00th=[11338], 90.00th=[14484], 95.00th=[16319], 00:33:20.318 | 99.00th=[18482], 99.50th=[19530], 99.90th=[49021], 99.95th=[49021], 00:33:20.318 | 99.99th=[49546] 00:33:20.318 write: IOPS=6495, BW=25.4MiB/s (26.6MB/s)(25.5MiB/1006msec); 0 zone resets 00:33:20.318 slat (nsec): min=1924, max=8292.2k, avg=61827.53, stdev=388813.56 00:33:20.318 clat (usec): min=2007, max=47705, avg=9855.56, stdev=3952.52 00:33:20.318 lat (usec): min=2016, max=47708, avg=9917.39, stdev=3964.05 00:33:20.318 clat percentiles (usec): 00:33:20.318 | 1.00th=[ 3294], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 7570], 00:33:20.318 | 30.00th=[ 8586], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10159], 00:33:20.318 | 70.00th=[10421], 80.00th=[10552], 90.00th=[12125], 95.00th=[13829], 00:33:20.318 | 99.00th=[31589], 99.50th=[32900], 99.90th=[46400], 99.95th=[47449], 00:33:20.318 | 99.99th=[47449] 00:33:20.318 bw ( KiB/s): min=25352, max=25904, per=35.64%, avg=25628.00, stdev=390.32, samples=2 00:33:20.318 iops : min= 6338, max= 6476, avg=6407.00, stdev=97.58, samples=2 00:33:20.318 lat (msec) : 4=1.57%, 10=52.20%, 20=45.03%, 50=1.20% 00:33:20.318 cpu : usr=4.08%, sys=7.46%, ctx=628, majf=0, minf=2 00:33:20.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:20.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:20.318 issued rwts: total=6144,6534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:20.318 
latency : target=0, window=0, percentile=100.00%, depth=128 00:33:20.318 job2: (groupid=0, jobs=1): err= 0: pid=1420468: Tue Dec 10 05:58:07 2024 00:33:20.318 read: IOPS=2023, BW=8095KiB/s (8289kB/s)(8192KiB/1012msec) 00:33:20.318 slat (usec): min=2, max=13419, avg=145.28, stdev=1077.89 00:33:20.318 clat (usec): min=6900, max=52128, avg=17031.94, stdev=6222.85 00:33:20.318 lat (usec): min=6909, max=52138, avg=17177.22, stdev=6314.53 00:33:20.318 clat percentiles (usec): 00:33:20.318 | 1.00th=[ 7570], 5.00th=[11338], 10.00th=[12649], 20.00th=[13829], 00:33:20.318 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14877], 60.00th=[15401], 00:33:20.318 | 70.00th=[16581], 80.00th=[19006], 90.00th=[23725], 95.00th=[29492], 00:33:20.318 | 99.00th=[46924], 99.50th=[47973], 99.90th=[52167], 99.95th=[52167], 00:33:20.318 | 99.99th=[52167] 00:33:20.318 write: IOPS=2463, BW=9854KiB/s (10.1MB/s)(9972KiB/1012msec); 0 zone resets 00:33:20.318 slat (usec): min=3, max=49034, avg=276.04, stdev=2424.86 00:33:20.318 clat (usec): min=1867, max=259194, avg=28924.26, stdev=22833.39 00:33:20.318 lat (msec): min=3, max=259, avg=29.20, stdev=23.30 00:33:20.318 clat percentiles (msec): 00:33:20.318 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:33:20.318 | 30.00th=[ 16], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 27], 00:33:20.318 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 46], 95.00th=[ 47], 00:33:20.318 | 99.00th=[ 140], 99.50th=[ 186], 99.90th=[ 259], 99.95th=[ 259], 00:33:20.318 | 99.99th=[ 259] 00:33:20.318 bw ( KiB/s): min= 8192, max=10728, per=13.15%, avg=9460.00, stdev=1793.22, samples=2 00:33:20.318 iops : min= 2048, max= 2682, avg=2365.00, stdev=448.31, samples=2 00:33:20.318 lat (msec) : 2=0.02%, 4=0.13%, 10=1.81%, 20=56.55%, 50=39.77% 00:33:20.318 lat (msec) : 100=1.01%, 250=0.64%, 500=0.07% 00:33:20.318 cpu : usr=2.47%, sys=2.87%, ctx=222, majf=0, minf=1 00:33:20.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:33:20.318 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:20.318 issued rwts: total=2048,2493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:20.318 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:20.318 job3: (groupid=0, jobs=1): err= 0: pid=1420469: Tue Dec 10 05:58:07 2024 00:33:20.318 read: IOPS=2948, BW=11.5MiB/s (12.1MB/s)(12.1MiB/1048msec) 00:33:20.318 slat (nsec): min=1639, max=15911k, avg=140538.88, stdev=923774.81 00:33:20.318 clat (usec): min=9057, max=55367, avg=18291.57, stdev=9654.25 00:33:20.318 lat (usec): min=9063, max=56789, avg=18432.11, stdev=9738.67 00:33:20.318 clat percentiles (usec): 00:33:20.318 | 1.00th=[ 9503], 5.00th=[11076], 10.00th=[11338], 20.00th=[11731], 00:33:20.318 | 30.00th=[12387], 40.00th=[14222], 50.00th=[15270], 60.00th=[15795], 00:33:20.318 | 70.00th=[16581], 80.00th=[22414], 90.00th=[35390], 95.00th=[43254], 00:33:20.318 | 99.00th=[52691], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:33:20.318 | 99.99th=[55313] 00:33:20.318 write: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1048msec); 0 zone resets 00:33:20.318 slat (usec): min=2, max=25409, avg=152.85, stdev=1042.44 00:33:20.318 clat (usec): min=8283, max=72635, avg=21188.78, stdev=13587.42 00:33:20.318 lat (usec): min=8289, max=72668, avg=21341.63, stdev=13690.78 00:33:20.318 clat percentiles (usec): 00:33:20.318 | 1.00th=[10421], 5.00th=[11207], 10.00th=[11469], 20.00th=[11731], 00:33:20.318 | 30.00th=[11994], 40.00th=[14484], 50.00th=[15270], 60.00th=[15795], 00:33:20.318 | 70.00th=[22414], 80.00th=[26608], 90.00th=[44303], 95.00th=[55313], 00:33:20.318 | 99.00th=[60031], 99.50th=[60556], 99.90th=[64750], 99.95th=[68682], 00:33:20.318 | 99.99th=[72877] 00:33:20.318 bw ( KiB/s): min=12136, max=15656, per=19.32%, avg=13896.00, stdev=2489.02, samples=2 00:33:20.318 iops : min= 3034, max= 3914, avg=3474.00, stdev=622.25, samples=2 00:33:20.318 lat (msec) : 10=1.63%, 
20=69.60%, 50=24.32%, 100=4.45% 00:33:20.318 cpu : usr=3.06%, sys=4.78%, ctx=264, majf=0, minf=1 00:33:20.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:33:20.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:20.318 issued rwts: total=3090,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:20.318 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:20.318 00:33:20.318 Run status group 0 (all jobs): 00:33:20.318 READ: bw=65.0MiB/s (68.1MB/s), 8095KiB/s-23.9MiB/s (8289kB/s-25.0MB/s), io=68.1MiB (71.4MB), run=1005-1048msec 00:33:20.318 WRITE: bw=70.2MiB/s (73.6MB/s), 9854KiB/s-25.4MiB/s (10.1MB/s-26.6MB/s), io=73.6MiB (77.2MB), run=1005-1048msec 00:33:20.318 00:33:20.318 Disk stats (read/write): 00:33:20.318 nvme0n1: ios=5154/5350, merge=0/0, ticks=27054/24619, in_queue=51673, util=99.50% 00:33:20.318 nvme0n2: ios=5120/5603, merge=0/0, ticks=48358/50190, in_queue=98548, util=86.99% 00:33:20.318 nvme0n3: ios=1692/2048, merge=0/0, ticks=28778/41116, in_queue=69894, util=98.44% 00:33:20.318 nvme0n4: ios=2622/3072, merge=0/0, ticks=15503/19689, in_queue=35192, util=98.32% 00:33:20.318 05:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:20.318 [global] 00:33:20.318 thread=1 00:33:20.318 invalidate=1 00:33:20.318 rw=randwrite 00:33:20.318 time_based=1 00:33:20.318 runtime=1 00:33:20.318 ioengine=libaio 00:33:20.318 direct=1 00:33:20.318 bs=4096 00:33:20.318 iodepth=128 00:33:20.318 norandommap=0 00:33:20.318 numjobs=1 00:33:20.318 00:33:20.318 verify_dump=1 00:33:20.318 verify_backlog=512 00:33:20.318 verify_state_save=0 00:33:20.318 do_verify=1 00:33:20.318 verify=crc32c-intel 00:33:20.318 [job0] 00:33:20.318 filename=/dev/nvme0n1 00:33:20.318 [job1] 
00:33:20.318 filename=/dev/nvme0n2 00:33:20.318 [job2] 00:33:20.318 filename=/dev/nvme0n3 00:33:20.318 [job3] 00:33:20.318 filename=/dev/nvme0n4 00:33:20.318 Could not set queue depth (nvme0n1) 00:33:20.318 Could not set queue depth (nvme0n2) 00:33:20.318 Could not set queue depth (nvme0n3) 00:33:20.318 Could not set queue depth (nvme0n4) 00:33:20.577 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:20.577 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:20.577 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:20.577 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:20.577 fio-3.35 00:33:20.577 Starting 4 threads 00:33:22.075 00:33:22.075 job0: (groupid=0, jobs=1): err= 0: pid=1420836: Tue Dec 10 05:58:09 2024 00:33:22.075 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:33:22.075 slat (nsec): min=1119, max=59984k, avg=266777.00, stdev=2438427.07 00:33:22.075 clat (msec): min=9, max=154, avg=33.91, stdev=34.02 00:33:22.075 lat (msec): min=9, max=154, avg=34.18, stdev=34.20 00:33:22.075 clat percentiles (msec): 00:33:22.075 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 14], 00:33:22.075 | 30.00th=[ 17], 40.00th=[ 19], 50.00th=[ 20], 60.00th=[ 22], 00:33:22.075 | 70.00th=[ 27], 80.00th=[ 49], 90.00th=[ 79], 95.00th=[ 129], 00:33:22.075 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:33:22.075 | 99.99th=[ 155] 00:33:22.075 write: IOPS=2426, BW=9707KiB/s (9940kB/s)(9756KiB/1005msec); 0 zone resets 00:33:22.075 slat (nsec): min=1975, max=11333k, avg=181475.19, stdev=784950.73 00:33:22.075 clat (usec): min=3997, max=56629, avg=23520.92, stdev=11733.39 00:33:22.075 lat (usec): min=4894, max=56634, avg=23702.40, stdev=11797.95 00:33:22.075 clat percentiles (usec): 
00:33:22.075 | 1.00th=[ 6194], 5.00th=[12649], 10.00th=[15795], 20.00th=[16057], 00:33:22.075 | 30.00th=[16188], 40.00th=[16581], 50.00th=[19792], 60.00th=[21627], 00:33:22.075 | 70.00th=[22414], 80.00th=[30540], 90.00th=[44827], 95.00th=[51119], 00:33:22.075 | 99.00th=[56361], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:33:22.075 | 99.99th=[56886] 00:33:22.075 bw ( KiB/s): min= 8192, max=10304, per=12.74%, avg=9248.00, stdev=1493.41, samples=2 00:33:22.075 iops : min= 2048, max= 2576, avg=2312.00, stdev=373.35, samples=2 00:33:22.075 lat (msec) : 4=0.02%, 10=1.56%, 20=51.88%, 50=33.81%, 100=9.20% 00:33:22.075 lat (msec) : 250=3.52% 00:33:22.075 cpu : usr=0.80%, sys=2.69%, ctx=262, majf=0, minf=1 00:33:22.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:33:22.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:22.075 issued rwts: total=2048,2439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:22.075 job1: (groupid=0, jobs=1): err= 0: pid=1420837: Tue Dec 10 05:58:09 2024 00:33:22.075 read: IOPS=6262, BW=24.5MiB/s (25.7MB/s)(24.6MiB/1006msec) 00:33:22.075 slat (nsec): min=1288, max=9035.2k, avg=82407.53, stdev=667196.90 00:33:22.075 clat (usec): min=1821, max=18928, avg=10457.82, stdev=2579.11 00:33:22.075 lat (usec): min=3115, max=18952, avg=10540.23, stdev=2629.93 00:33:22.075 clat percentiles (usec): 00:33:22.075 | 1.00th=[ 5538], 5.00th=[ 7570], 10.00th=[ 8160], 20.00th=[ 8848], 00:33:22.075 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:33:22.075 | 70.00th=[10421], 80.00th=[12125], 90.00th=[14877], 95.00th=[16319], 00:33:22.075 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:33:22.075 | 99.99th=[19006] 00:33:22.075 write: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec); 0 zone resets 
00:33:22.075 slat (nsec): min=1982, max=8103.0k, avg=66240.67, stdev=485693.91 00:33:22.075 clat (usec): min=1900, max=18382, avg=9266.58, stdev=1941.04 00:33:22.075 lat (usec): min=1910, max=18391, avg=9332.82, stdev=1976.55 00:33:22.075 clat percentiles (usec): 00:33:22.075 | 1.00th=[ 3654], 5.00th=[ 5800], 10.00th=[ 6194], 20.00th=[ 8455], 00:33:22.075 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:33:22.075 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10290], 95.00th=[13042], 00:33:22.075 | 99.00th=[13566], 99.50th=[14353], 99.90th=[18220], 99.95th=[18220], 00:33:22.075 | 99.99th=[18482] 00:33:22.075 bw ( KiB/s): min=26608, max=26640, per=36.68%, avg=26624.00, stdev=22.63, samples=2 00:33:22.075 iops : min= 6652, max= 6660, avg=6656.00, stdev= 5.66, samples=2 00:33:22.075 lat (msec) : 2=0.05%, 4=0.85%, 10=62.93%, 20=36.17% 00:33:22.075 cpu : usr=5.17%, sys=7.46%, ctx=494, majf=0, minf=1 00:33:22.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:33:22.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:22.075 issued rwts: total=6300,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:22.075 job2: (groupid=0, jobs=1): err= 0: pid=1420838: Tue Dec 10 05:58:09 2024 00:33:22.075 read: IOPS=4552, BW=17.8MiB/s (18.6MB/s)(17.9MiB/1009msec) 00:33:22.075 slat (nsec): min=1357, max=12022k, avg=103581.22, stdev=810401.19 00:33:22.075 clat (usec): min=5025, max=35196, avg=13296.42, stdev=3790.16 00:33:22.075 lat (usec): min=5031, max=35201, avg=13400.00, stdev=3848.32 00:33:22.075 clat percentiles (usec): 00:33:22.075 | 1.00th=[ 7177], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[10945], 00:33:22.075 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[13435], 00:33:22.075 | 70.00th=[14353], 80.00th=[15664], 90.00th=[17957], 
95.00th=[20055], 00:33:22.075 | 99.00th=[25822], 99.50th=[30540], 99.90th=[35390], 99.95th=[35390], 00:33:22.075 | 99.99th=[35390] 00:33:22.075 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:33:22.075 slat (usec): min=2, max=10928, avg=108.53, stdev=586.21 00:33:22.075 clat (usec): min=1735, max=35186, avg=14499.19, stdev=7243.68 00:33:22.075 lat (usec): min=1772, max=35189, avg=14607.72, stdev=7295.65 00:33:22.075 clat percentiles (usec): 00:33:22.075 | 1.00th=[ 5211], 5.00th=[ 7373], 10.00th=[ 8225], 20.00th=[10814], 00:33:22.075 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:33:22.075 | 70.00th=[12387], 80.00th=[16319], 90.00th=[29230], 95.00th=[32113], 00:33:22.075 | 99.00th=[32375], 99.50th=[32375], 99.90th=[32375], 99.95th=[32375], 00:33:22.075 | 99.99th=[35390] 00:33:22.075 bw ( KiB/s): min=16384, max=20480, per=25.39%, avg=18432.00, stdev=2896.31, samples=2 00:33:22.075 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:33:22.075 lat (msec) : 2=0.01%, 4=0.15%, 10=11.93%, 20=76.50%, 50=11.40% 00:33:22.075 cpu : usr=3.67%, sys=4.86%, ctx=492, majf=0, minf=1 00:33:22.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:33:22.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:22.075 issued rwts: total=4593,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:22.075 job3: (groupid=0, jobs=1): err= 0: pid=1420839: Tue Dec 10 05:58:09 2024 00:33:22.075 read: IOPS=4326, BW=16.9MiB/s (17.7MB/s)(17.1MiB/1009msec) 00:33:22.075 slat (nsec): min=1157, max=14841k, avg=103556.02, stdev=710752.88 00:33:22.075 clat (usec): min=1213, max=38618, avg=13631.24, stdev=4464.91 00:33:22.075 lat (usec): min=6844, max=38628, avg=13734.79, stdev=4505.17 00:33:22.075 clat percentiles (usec): 00:33:22.075 
| 1.00th=[ 8225], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10683], 00:33:22.075 | 30.00th=[11076], 40.00th=[11338], 50.00th=[12649], 60.00th=[13566], 00:33:22.075 | 70.00th=[14746], 80.00th=[15270], 90.00th=[16909], 95.00th=[24249], 00:33:22.075 | 99.00th=[30016], 99.50th=[32900], 99.90th=[38536], 99.95th=[38536], 00:33:22.075 | 99.99th=[38536] 00:33:22.075 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:33:22.075 slat (usec): min=2, max=6584, avg=109.32, stdev=559.35 00:33:22.075 clat (usec): min=6373, max=51600, avg=14788.41, stdev=7317.61 00:33:22.075 lat (usec): min=6386, max=51620, avg=14897.73, stdev=7374.90 00:33:22.075 clat percentiles (usec): 00:33:22.075 | 1.00th=[ 8291], 5.00th=[10028], 10.00th=[10814], 20.00th=[11076], 00:33:22.075 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:33:22.075 | 70.00th=[14091], 80.00th=[16909], 90.00th=[22152], 95.00th=[31851], 00:33:22.075 | 99.00th=[45876], 99.50th=[48497], 99.90th=[51643], 99.95th=[51643], 00:33:22.075 | 99.99th=[51643] 00:33:22.076 bw ( KiB/s): min=13776, max=23088, per=25.39%, avg=18432.00, stdev=6584.58, samples=2 00:33:22.076 iops : min= 3444, max= 5772, avg=4608.00, stdev=1646.14, samples=2 00:33:22.076 lat (msec) : 2=0.01%, 10=6.47%, 20=80.55%, 50=12.88%, 100=0.08% 00:33:22.076 cpu : usr=4.46%, sys=4.66%, ctx=415, majf=0, minf=1 00:33:22.076 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:33:22.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:22.076 issued rwts: total=4365,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:22.076 00:33:22.076 Run status group 0 (all jobs): 00:33:22.076 READ: bw=67.0MiB/s (70.3MB/s), 8151KiB/s-24.5MiB/s (8347kB/s-25.7MB/s), io=67.6MiB (70.9MB), run=1005-1009msec 00:33:22.076 WRITE: bw=70.9MiB/s 
(74.3MB/s), 9707KiB/s-25.8MiB/s (9940kB/s-27.1MB/s), io=71.5MiB (75.0MB), run=1005-1009msec 00:33:22.076 00:33:22.076 Disk stats (read/write): 00:33:22.076 nvme0n1: ios=1586/1895, merge=0/0, ticks=17834/18398, in_queue=36232, util=86.77% 00:33:22.076 nvme0n2: ios=5334/5632, merge=0/0, ticks=53591/50423, in_queue=104014, util=87.22% 00:33:22.076 nvme0n3: ios=3584/3887, merge=0/0, ticks=46904/57885, in_queue=104789, util=89.07% 00:33:22.076 nvme0n4: ios=4087/4096, merge=0/0, ticks=26719/25047, in_queue=51766, util=89.73% 00:33:22.076 05:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:22.076 05:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1421060 00:33:22.076 05:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:22.076 05:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:22.076 [global] 00:33:22.076 thread=1 00:33:22.076 invalidate=1 00:33:22.076 rw=read 00:33:22.076 time_based=1 00:33:22.076 runtime=10 00:33:22.076 ioengine=libaio 00:33:22.076 direct=1 00:33:22.076 bs=4096 00:33:22.076 iodepth=1 00:33:22.076 norandommap=1 00:33:22.076 numjobs=1 00:33:22.076 00:33:22.076 [job0] 00:33:22.076 filename=/dev/nvme0n1 00:33:22.076 [job1] 00:33:22.076 filename=/dev/nvme0n2 00:33:22.076 [job2] 00:33:22.076 filename=/dev/nvme0n3 00:33:22.076 [job3] 00:33:22.076 filename=/dev/nvme0n4 00:33:22.076 Could not set queue depth (nvme0n1) 00:33:22.076 Could not set queue depth (nvme0n2) 00:33:22.076 Could not set queue depth (nvme0n3) 00:33:22.076 Could not set queue depth (nvme0n4) 00:33:22.076 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:22.076 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:33:22.076 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:22.076 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:22.076 fio-3.35 00:33:22.076 Starting 4 threads 00:33:25.364 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:25.364 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=26099712, buflen=4096 00:33:25.364 fio: pid=1421206, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:25.364 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:25.364 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:25.364 05:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:25.364 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:33:25.364 fio: pid=1421205, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:25.364 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:25.364 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:25.364 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=503808, buflen=4096 00:33:25.364 fio: 
pid=1421200, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:25.623 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:25.623 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:25.623 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=28717056, buflen=4096 00:33:25.623 fio: pid=1421204, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:25.623 00:33:25.623 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1421200: Tue Dec 10 05:58:13 2024 00:33:25.623 read: IOPS=39, BW=157KiB/s (161kB/s)(492KiB/3137msec) 00:33:25.623 slat (usec): min=7, max=18531, avg=254.83, stdev=1921.14 00:33:25.623 clat (usec): min=194, max=41985, avg=25072.53, stdev=19955.31 00:33:25.623 lat (usec): min=203, max=59680, avg=25329.23, stdev=20239.22 00:33:25.623 clat percentiles (usec): 00:33:25.623 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 233], 00:33:25.623 | 30.00th=[ 237], 40.00th=[40633], 50.00th=[40633], 60.00th=[40633], 00:33:25.623 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:25.623 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:25.623 | 99.99th=[42206] 00:33:25.623 bw ( KiB/s): min= 126, max= 200, per=0.97%, avg=158.17, stdev=28.17, samples=6 00:33:25.623 iops : min= 31, max= 50, avg=39.33, stdev= 7.03, samples=6 00:33:25.623 lat (usec) : 250=37.10%, 500=1.61% 00:33:25.623 lat (msec) : 50=60.48% 00:33:25.623 cpu : usr=0.16%, sys=0.00%, ctx=127, majf=0, minf=1 00:33:25.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:25.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:25.624 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.624 issued rwts: total=124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:25.624 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1421204: Tue Dec 10 05:58:13 2024 00:33:25.624 read: IOPS=2104, BW=8417KiB/s (8619kB/s)(27.4MiB/3332msec) 00:33:25.624 slat (usec): min=6, max=10831, avg=10.52, stdev=156.73 00:33:25.624 clat (usec): min=186, max=42302, avg=459.76, stdev=2881.37 00:33:25.624 lat (usec): min=193, max=52027, avg=470.28, stdev=2922.40 00:33:25.624 clat percentiles (usec): 00:33:25.624 | 1.00th=[ 208], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 241], 00:33:25.624 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:33:25.624 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 289], 00:33:25.624 | 99.00th=[ 482], 99.50th=[ 1713], 99.90th=[41157], 99.95th=[41681], 00:33:25.624 | 99.99th=[42206] 00:33:25.624 bw ( KiB/s): min= 266, max=15488, per=57.20%, avg=9324.33, stdev=7103.91, samples=6 00:33:25.624 iops : min= 66, max= 3872, avg=2331.00, stdev=1776.11, samples=6 00:33:25.624 lat (usec) : 250=54.86%, 500=44.47%, 750=0.10%, 1000=0.01% 00:33:25.624 lat (msec) : 2=0.04%, 50=0.50% 00:33:25.624 cpu : usr=1.02%, sys=3.36%, ctx=7016, majf=0, minf=2 00:33:25.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:25.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.624 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.624 issued rwts: total=7012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:25.624 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1421205: Tue Dec 10 05:58:13 2024 00:33:25.624 read: 
IOPS=25, BW=99.5KiB/s (102kB/s)(292KiB/2934msec) 00:33:25.624 slat (nsec): min=10354, max=36400, avg=22342.80, stdev=2665.15 00:33:25.624 clat (usec): min=378, max=42076, avg=39877.31, stdev=6662.21 00:33:25.624 lat (usec): min=400, max=42105, avg=39899.65, stdev=6661.02 00:33:25.624 clat percentiles (usec): 00:33:25.624 | 1.00th=[ 379], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:25.624 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:25.624 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:25.624 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:25.624 | 99.99th=[42206] 00:33:25.624 bw ( KiB/s): min= 96, max= 104, per=0.61%, avg=100.60, stdev= 4.22, samples=5 00:33:25.624 iops : min= 24, max= 26, avg=25.00, stdev= 1.00, samples=5 00:33:25.624 lat (usec) : 500=1.35%, 750=1.35% 00:33:25.624 lat (msec) : 50=95.95% 00:33:25.624 cpu : usr=0.14%, sys=0.00%, ctx=74, majf=0, minf=2 00:33:25.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:25.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.624 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.624 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:25.624 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1421206: Tue Dec 10 05:58:13 2024 00:33:25.624 read: IOPS=2352, BW=9409KiB/s (9634kB/s)(24.9MiB/2709msec) 00:33:25.624 slat (nsec): min=6754, max=41960, avg=8531.08, stdev=1642.74 00:33:25.624 clat (usec): min=194, max=41012, avg=411.46, stdev=2508.98 00:33:25.624 lat (usec): min=204, max=41026, avg=419.99, stdev=2509.74 00:33:25.624 clat percentiles (usec): 00:33:25.624 | 1.00th=[ 217], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 243], 00:33:25.624 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 247], 
60.00th=[ 249], 00:33:25.624 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 289], 00:33:25.624 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[41157], 99.95th=[41157], 00:33:25.624 | 99.99th=[41157] 00:33:25.624 bw ( KiB/s): min= 104, max=15520, per=55.52%, avg=9050.60, stdev=6579.84, samples=5 00:33:25.624 iops : min= 26, max= 3880, avg=2262.60, stdev=1644.92, samples=5 00:33:25.624 lat (usec) : 250=61.93%, 500=37.58%, 750=0.06% 00:33:25.624 lat (msec) : 4=0.02%, 50=0.39% 00:33:25.624 cpu : usr=0.96%, sys=3.36%, ctx=6375, majf=0, minf=2 00:33:25.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:25.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.624 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.624 issued rwts: total=6373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:25.624 00:33:25.624 Run status group 0 (all jobs): 00:33:25.624 READ: bw=15.9MiB/s (16.7MB/s), 99.5KiB/s-9409KiB/s (102kB/s-9634kB/s), io=53.0MiB (55.6MB), run=2709-3332msec 00:33:25.624 00:33:25.624 Disk stats (read/write): 00:33:25.624 nvme0n1: ios=122/0, merge=0/0, ticks=3045/0, in_queue=3045, util=94.82% 00:33:25.624 nvme0n2: ios=7000/0, merge=0/0, ticks=2931/0, in_queue=2931, util=95.57% 00:33:25.624 nvme0n3: ios=71/0, merge=0/0, ticks=2831/0, in_queue=2831, util=96.52% 00:33:25.624 nvme0n4: ios=6086/0, merge=0/0, ticks=3160/0, in_queue=3160, util=99.15% 00:33:25.883 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:25.883 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:26.142 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 
-- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:26.142 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:26.142 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:26.142 05:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:26.401 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:26.401 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1421060 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:26.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:26.660 nvmf hotplug test: fio failed as expected 00:33:26.660 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@121 -- # sync 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:26.919 rmmod nvme_tcp 00:33:26.919 rmmod nvme_fabrics 00:33:26.919 rmmod nvme_keyring 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1418429 ']' 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1418429 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1418429 ']' 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1418429 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.919 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1418429 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1418429' 00:33:27.179 killing process with pid 1418429 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1418429 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1418429 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:27.179 05:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:27.179 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:27.179 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:27.179 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.179 05:58:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.179 05:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:29.713 00:33:29.713 real 0m25.762s 00:33:29.713 user 1m31.658s 00:33:29.713 sys 0m10.952s 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.713 ************************************ 00:33:29.713 END TEST nvmf_fio_target 00:33:29.713 ************************************ 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:29.713 ************************************ 00:33:29.713 START TEST nvmf_bdevio 00:33:29.713 ************************************ 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:29.713 * Looking for test storage... 
00:33:29.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:29.713 05:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:29.713 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- 
# return 0 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:29.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.714 --rc genhtml_branch_coverage=1 00:33:29.714 --rc genhtml_function_coverage=1 00:33:29.714 --rc genhtml_legend=1 00:33:29.714 --rc geninfo_all_blocks=1 00:33:29.714 --rc geninfo_unexecuted_blocks=1 00:33:29.714 00:33:29.714 ' 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:29.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.714 --rc genhtml_branch_coverage=1 00:33:29.714 --rc genhtml_function_coverage=1 00:33:29.714 --rc genhtml_legend=1 00:33:29.714 --rc geninfo_all_blocks=1 00:33:29.714 --rc geninfo_unexecuted_blocks=1 00:33:29.714 00:33:29.714 ' 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:29.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.714 --rc genhtml_branch_coverage=1 00:33:29.714 --rc genhtml_function_coverage=1 00:33:29.714 --rc genhtml_legend=1 00:33:29.714 --rc geninfo_all_blocks=1 00:33:29.714 --rc geninfo_unexecuted_blocks=1 00:33:29.714 00:33:29.714 ' 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:29.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.714 --rc genhtml_branch_coverage=1 00:33:29.714 --rc genhtml_function_coverage=1 00:33:29.714 --rc genhtml_legend=1 00:33:29.714 --rc geninfo_all_blocks=1 00:33:29.714 --rc geninfo_unexecuted_blocks=1 00:33:29.714 00:33:29.714 ' 00:33:29.714 05:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:29.714 05:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:29.714 05:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.285 05:58:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:36.285 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:36.285 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.285 05:58:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:36.285 Found net devices under 0000:af:00.0: cvl_0_0 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:36.285 Found net devices under 0000:af:00.1: cvl_0_1 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.285 05:58:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.285 05:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.285 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.285 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:36.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:33:36.286 00:33:36.286 --- 10.0.0.2 ping statistics --- 00:33:36.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.286 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:36.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:33:36.286 00:33:36.286 --- 10.0.0.1 ping statistics --- 00:33:36.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.286 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1425362 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1425362 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1425362 ']' 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:36.286 [2024-12-10 05:58:23.251849] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:36.286 [2024-12-10 05:58:23.252821] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:33:36.286 [2024-12-10 05:58:23.252865] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.286 [2024-12-10 05:58:23.330257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:36.286 [2024-12-10 05:58:23.373322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.286 [2024-12-10 05:58:23.373355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.286 [2024-12-10 05:58:23.373363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.286 [2024-12-10 05:58:23.373368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.286 [2024-12-10 05:58:23.373374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:36.286 [2024-12-10 05:58:23.374836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:36.286 [2024-12-10 05:58:23.374944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:36.286 [2024-12-10 05:58:23.375057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:36.286 [2024-12-10 05:58:23.375059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:36.286 [2024-12-10 05:58:23.442875] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:36.286 [2024-12-10 05:58:23.443825] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:36.286 [2024-12-10 05:58:23.443892] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:36.286 [2024-12-10 05:58:23.444371] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:36.286 [2024-12-10 05:58:23.444413] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:36.286 [2024-12-10 05:58:23.511732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:36.286 Malloc0 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:36.286 [2024-12-10 05:58:23.587896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.286 { 00:33:36.286 "params": { 00:33:36.286 "name": "Nvme$subsystem", 00:33:36.286 "trtype": "$TEST_TRANSPORT", 00:33:36.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.286 "adrfam": "ipv4", 00:33:36.286 "trsvcid": "$NVMF_PORT", 00:33:36.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.286 "hdgst": ${hdgst:-false}, 00:33:36.286 "ddgst": ${ddgst:-false} 00:33:36.286 }, 00:33:36.286 "method": "bdev_nvme_attach_controller" 00:33:36.286 } 00:33:36.286 EOF 00:33:36.286 )") 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:36.286 05:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:36.286 "params": { 00:33:36.286 "name": "Nvme1", 00:33:36.286 "trtype": "tcp", 00:33:36.286 "traddr": "10.0.0.2", 00:33:36.287 "adrfam": "ipv4", 00:33:36.287 "trsvcid": "4420", 00:33:36.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:36.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:36.287 "hdgst": false, 00:33:36.287 "ddgst": false 00:33:36.287 }, 00:33:36.287 "method": "bdev_nvme_attach_controller" 00:33:36.287 }' 00:33:36.287 [2024-12-10 05:58:23.638654] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:33:36.287 [2024-12-10 05:58:23.638698] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425516 ] 00:33:36.287 [2024-12-10 05:58:23.713993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:36.287 [2024-12-10 05:58:23.756292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.287 [2024-12-10 05:58:23.756401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.287 [2024-12-10 05:58:23.756401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:36.287 I/O targets: 00:33:36.287 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:36.287 00:33:36.287 00:33:36.287 CUnit - A unit testing framework for C - Version 2.1-3 00:33:36.287 http://cunit.sourceforge.net/ 00:33:36.287 00:33:36.287 00:33:36.287 Suite: bdevio tests on: Nvme1n1 00:33:36.287 Test: blockdev write read block ...passed 00:33:36.287 Test: blockdev write zeroes read block ...passed 00:33:36.287 Test: blockdev write zeroes read no split ...passed 00:33:36.287 Test: blockdev 
write zeroes read split ...passed 00:33:36.287 Test: blockdev write zeroes read split partial ...passed 00:33:36.287 Test: blockdev reset ...[2024-12-10 05:58:24.174353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:36.287 [2024-12-10 05:58:24.174410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e80610 (9): Bad file descriptor 00:33:36.546 [2024-12-10 05:58:24.268145] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:33:36.546 passed 00:33:36.546 Test: blockdev write read 8 blocks ...passed 00:33:36.546 Test: blockdev write read size > 128k ...passed 00:33:36.546 Test: blockdev write read invalid size ...passed 00:33:36.546 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:36.546 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:36.546 Test: blockdev write read max offset ...passed 00:33:36.546 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:36.805 Test: blockdev writev readv 8 blocks ...passed 00:33:36.805 Test: blockdev writev readv 30 x 1block ...passed 00:33:36.805 Test: blockdev writev readv block ...passed 00:33:36.805 Test: blockdev writev readv size > 128k ...passed 00:33:36.805 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:36.805 Test: blockdev comparev and writev ...[2024-12-10 05:58:24.519042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:36.805 [2024-12-10 05:58:24.519070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.805 [2024-12-10 05:58:24.519084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:36.805 
[2024-12-10 05:58:24.519092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:36.805 [2024-12-10 05:58:24.519389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:36.805 [2024-12-10 05:58:24.519405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:36.805 [2024-12-10 05:58:24.519417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:36.805 [2024-12-10 05:58:24.519424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:36.805 [2024-12-10 05:58:24.519699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:36.805 [2024-12-10 05:58:24.519709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:36.805 [2024-12-10 05:58:24.519720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:36.805 [2024-12-10 05:58:24.519728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:36.805 [2024-12-10 05:58:24.520020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:36.805 [2024-12-10 05:58:24.520031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:36.805 [2024-12-10 05:58:24.520043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:36.805 [2024-12-10 05:58:24.520051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:36.805 passed 00:33:36.805 Test: blockdev nvme passthru rw ...passed 00:33:36.805 Test: blockdev nvme passthru vendor specific ...[2024-12-10 05:58:24.602550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:36.805 [2024-12-10 05:58:24.602565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:36.805 [2024-12-10 05:58:24.602673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:36.805 [2024-12-10 05:58:24.602683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:36.805 [2024-12-10 05:58:24.602791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:36.805 [2024-12-10 05:58:24.602801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:36.805 [2024-12-10 05:58:24.602903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:36.805 [2024-12-10 05:58:24.602912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:36.805 passed 00:33:36.805 Test: blockdev nvme admin passthru ...passed 00:33:36.805 Test: blockdev copy ...passed 00:33:36.805 00:33:36.805 Run Summary: Type Total Ran Passed Failed Inactive 00:33:36.805 suites 1 1 n/a 0 0 00:33:36.805 tests 23 23 23 0 0 00:33:36.805 asserts 152 152 152 0 n/a 00:33:36.805 00:33:36.805 Elapsed time = 1.188 
seconds 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:37.064 rmmod nvme_tcp 00:33:37.064 rmmod nvme_fabrics 00:33:37.064 rmmod nvme_keyring 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1425362 ']' 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1425362 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1425362 ']' 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1425362 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1425362 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1425362' 00:33:37.064 killing process with pid 1425362 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1425362 00:33:37.064 05:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1425362 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.326 05:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.862 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:39.862 00:33:39.862 real 0m10.068s 00:33:39.862 user 0m9.732s 00:33:39.862 sys 0m5.219s 00:33:39.862 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.862 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:39.862 ************************************ 00:33:39.862 END TEST nvmf_bdevio 00:33:39.862 ************************************ 00:33:39.862 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:39.862 00:33:39.862 real 4m31.458s 00:33:39.862 user 9m12.445s 00:33:39.862 sys 1m51.875s 00:33:39.862 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.862 05:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:39.862 ************************************ 00:33:39.862 END TEST nvmf_target_core_interrupt_mode 00:33:39.862 ************************************ 00:33:39.862 05:58:27 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:39.862 05:58:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:39.862 05:58:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:39.862 05:58:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.862 ************************************ 00:33:39.862 START TEST nvmf_interrupt 00:33:39.862 ************************************ 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:39.862 * Looking for test storage... 
00:33:39.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.862 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:39.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.863 --rc genhtml_branch_coverage=1 00:33:39.863 --rc genhtml_function_coverage=1 00:33:39.863 --rc genhtml_legend=1 00:33:39.863 --rc geninfo_all_blocks=1 00:33:39.863 --rc geninfo_unexecuted_blocks=1 00:33:39.863 00:33:39.863 ' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:39.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.863 --rc genhtml_branch_coverage=1 00:33:39.863 --rc 
genhtml_function_coverage=1 00:33:39.863 --rc genhtml_legend=1 00:33:39.863 --rc geninfo_all_blocks=1 00:33:39.863 --rc geninfo_unexecuted_blocks=1 00:33:39.863 00:33:39.863 ' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:39.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.863 --rc genhtml_branch_coverage=1 00:33:39.863 --rc genhtml_function_coverage=1 00:33:39.863 --rc genhtml_legend=1 00:33:39.863 --rc geninfo_all_blocks=1 00:33:39.863 --rc geninfo_unexecuted_blocks=1 00:33:39.863 00:33:39.863 ' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:39.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.863 --rc genhtml_branch_coverage=1 00:33:39.863 --rc genhtml_function_coverage=1 00:33:39.863 --rc genhtml_legend=1 00:33:39.863 --rc geninfo_all_blocks=1 00:33:39.863 --rc geninfo_unexecuted_blocks=1 00:33:39.863 00:33:39.863 ' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.863 
05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.863 
05:58:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:39.863 05:58:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:39.863 
05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:39.863 05:58:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.434 05:58:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:46.434 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:46.434 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.434 05:58:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:46.434 Found net devices under 0000:af:00.0: cvl_0_0 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:46.434 Found net devices under 0000:af:00.1: cvl_0_1 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.434 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.434 05:58:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:46.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:33:46.435 00:33:46.435 --- 10.0.0.2 ping statistics --- 00:33:46.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.435 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:33:46.435 00:33:46.435 --- 10.0.0.1 ping statistics --- 00:33:46.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.435 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:46.435 05:58:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1429105 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1429105 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1429105 ']' 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:46.435 [2024-12-10 05:58:33.476194] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:46.435 [2024-12-10 05:58:33.477109] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:33:46.435 [2024-12-10 05:58:33.477144] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.435 [2024-12-10 05:58:33.552330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:46.435 [2024-12-10 05:58:33.603442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.435 [2024-12-10 05:58:33.603487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.435 [2024-12-10 05:58:33.603504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.435 [2024-12-10 05:58:33.603514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.435 [2024-12-10 05:58:33.603522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:46.435 [2024-12-10 05:58:33.604980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.435 [2024-12-10 05:58:33.604985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.435 [2024-12-10 05:58:33.688244] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:46.435 [2024-12-10 05:58:33.688832] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:46.435 [2024-12-10 05:58:33.689028] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:46.435 5000+0 records in 00:33:46.435 5000+0 records out 00:33:46.435 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0172 s, 595 MB/s 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:46.435 AIO0 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.435 05:58:33 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:46.435 [2024-12-10 05:58:33.821879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:46.435 [2024-12-10 05:58:33.862193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1429105 0 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1429105 0 idle 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1429105 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1429105 -w 256 00:33:46.435 05:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1429105 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.27 reactor_0' 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1429105 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.27 reactor_0 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:46.435 
05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1429105 1 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1429105 1 idle 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1429105 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:46.435 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1429105 -w 256 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1429153 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1429153 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1429343 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1429105 0 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1429105 0 busy 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1429105 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1429105 -w 256 00:33:46.436 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1429105 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0' 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1429105 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:46.695 05:58:34 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1429105 1 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1429105 1 busy 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1429105 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1429105 -w 256 00:33:46.695 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:46.954 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1429153 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:00.27 reactor_1' 00:33:46.954 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1429153 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:00.27 reactor_1 00:33:46.954 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:46.954 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:46.954 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:33:46.954 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=93 00:33:46.954 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:46.954 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:46.954 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:46.954 05:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:46.954 05:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1429343 00:33:56.932 Initializing NVMe Controllers 00:33:56.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:56.932 Controller IO queue size 256, less than required. 00:33:56.932 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:56.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:56.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:56.932 Initialization complete. Launching workers. 
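The reactor_is_busy_or_idle checks traced throughout this test boil down to one operation: take the reactor thread's line from a single batch iteration of `top`, read the %CPU column, and compare it against a threshold. A condensed, hypothetical re-implementation of that core step (the function name is illustrative; the real helper in interrupt/common.sh also retries up to 10 times while the state has not settled):

```shell
#!/bin/sh
# Hypothetical condensation of the reactor_is_busy_or_idle check from
# interrupt/common.sh. The caller passes one line of `top -bHn 1 -p <pid>`
# output (already grepped for the reactor thread) plus the desired state.
# Thresholds mirror the values visible in the log: busy >= 30%, idle <= 30%.
check_reactor_state() {
    top_line=$1
    state=$2
    busy_threshold=30
    idle_threshold=30
    # Field 9 of top's batch output is %CPU; truncate the fraction to an
    # integer, matching the traced awk '{print $9}' / cpu_rate=99 steps.
    cpu_rate=$(printf '%s\n' "$top_line" | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}
    if [ "$state" = busy ]; then
        [ "$cpu_rate" -ge "$busy_threshold" ] && echo busy || echo "not busy"
    else
        [ "$cpu_rate" -le "$idle_threshold" ] && echo idle || echo "not idle"
    fi
}

# In the real test the line comes from: top -bHn 1 -p "$pid" -w 256 | grep reactor_0
check_reactor_state '1429105 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0' busy  # → busy
```

Note the asymmetric thresholds in the idle checks above (busy_threshold=65, idle_threshold=30): a reactor only counts as idle when %CPU stays at or below 30.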
00:33:56.932 ========================================================
00:33:56.932 Latency(us)
00:33:56.932 Device Information : IOPS MiB/s Average min max
00:33:56.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16449.30 64.26 15568.85 3555.62 31897.44
00:33:56.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16601.90 64.85 15423.31 8330.10 29855.93
00:33:56.932 ========================================================
00:33:56.932 Total : 33051.20 129.11 15495.75 3555.62 31897.44
00:33:56.932
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1429105 0
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1429105 0 idle
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1429105
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p
1429105 -w 256 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1429105 root 20 0 128.2g 47616 34560 S 6.7 0.0 0:20.26 reactor_0' 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1429105 root 20 0 128.2g 47616 34560 S 6.7 0.0 0:20.26 reactor_0 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1429105 1 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1429105 1 idle 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1429105 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:56.932 05:58:44 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1429105 -w 256 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1429153 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1429153 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:56.932 05:58:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:57.500 05:58:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
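The waitforserial helper invoked above after `nvme connect` simply polls `lsblk -l -o NAME,SERIAL` until a block device carrying the expected serial appears. A minimal sketch of that pattern (function names are illustrative; the real helper in common/autotest_common.sh retries up to 15 times with a 2 s sleep, as the `(( i++ <= 15 ))` / `sleep 2` trace shows):

```shell
#!/bin/sh
# Sketch of the waitforserial polling pattern seen in the trace above.
# serial_present is split out so the match logic is testable without lsblk.
serial_present() {
    # $1: output of `lsblk -l -o NAME,SERIAL`, $2: serial string to find
    printf '%s\n' "$1" | grep -q -w "$2"
}

wait_for_serial() {
    serial=$1
    i=0
    while [ "$i" -le 15 ]; do
        if serial_present "$(lsblk -l -o NAME,SERIAL)" "$serial"; then
            return 0    # device with the expected serial is visible
        fi
        i=$((i + 1))
        sleep 2
    done
    return 1            # device never appeared within the retry budget
}
```

The disconnect path later in the log (waitforserial_disconnect) is the mirror image: it loops until the same grep stops matching.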
00:33:57.500 05:58:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:57.500 05:58:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:57.500 05:58:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:57.500 05:58:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1429105 0 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1429105 0 idle 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1429105 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1429105 -w 256 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1429105 root 20 0 128.2g 73728 34560 S 6.7 0.1 0:20.50 reactor_0' 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1429105 root 20 0 128.2g 73728 34560 S 6.7 0.1 0:20.50 reactor_0 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:59.407 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1429105 1 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1429105 1 idle 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1429105 00:33:59.667 
05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1429105 -w 256 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1429153 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.09 reactor_1' 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1429153 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.09 reactor_1 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:59.667 05:58:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:59.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:59.926 05:58:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:59.926 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:59.926 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:59.926 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:59.926 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:59.926 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:59.926 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:59.927 rmmod nvme_tcp 00:33:59.927 rmmod nvme_fabrics 00:33:59.927 rmmod nvme_keyring 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:59.927 05:58:47 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1429105 ']' 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1429105 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1429105 ']' 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1429105 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1429105 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1429105' 00:33:59.927 killing process with pid 1429105 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1429105 00:33:59.927 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1429105 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:00.186 05:58:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.723 05:58:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:02.723 00:34:02.723 real 0m22.737s 00:34:02.723 user 0m39.776s 00:34:02.723 sys 0m8.320s 00:34:02.723 05:58:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.723 05:58:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:02.723 ************************************ 00:34:02.723 END TEST nvmf_interrupt 00:34:02.723 ************************************ 00:34:02.723 00:34:02.723 real 27m15.873s 00:34:02.723 user 56m9.741s 00:34:02.723 sys 9m19.519s 00:34:02.723 05:58:50 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.723 05:58:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.723 ************************************ 00:34:02.723 END TEST nvmf_tcp 00:34:02.723 ************************************ 00:34:02.723 05:58:50 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:34:02.723 05:58:50 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:02.723 05:58:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:02.723 05:58:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:02.723 05:58:50 -- common/autotest_common.sh@10 -- # set +x 00:34:02.723 ************************************ 
00:34:02.723 START TEST spdkcli_nvmf_tcp 00:34:02.723 ************************************ 00:34:02.723 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:02.723 * Looking for test storage... 00:34:02.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:02.723 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:02.723 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:02.723 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:34:02.723 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:02.723 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:02.723 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:02.723 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:02.723 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:02.723 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:02.723 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:02.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.724 --rc genhtml_branch_coverage=1 00:34:02.724 --rc genhtml_function_coverage=1 00:34:02.724 --rc genhtml_legend=1 00:34:02.724 --rc geninfo_all_blocks=1 00:34:02.724 --rc geninfo_unexecuted_blocks=1 00:34:02.724 00:34:02.724 ' 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:02.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.724 --rc genhtml_branch_coverage=1 00:34:02.724 --rc genhtml_function_coverage=1 00:34:02.724 --rc genhtml_legend=1 00:34:02.724 --rc geninfo_all_blocks=1 
00:34:02.724 --rc geninfo_unexecuted_blocks=1 00:34:02.724 00:34:02.724 ' 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:02.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.724 --rc genhtml_branch_coverage=1 00:34:02.724 --rc genhtml_function_coverage=1 00:34:02.724 --rc genhtml_legend=1 00:34:02.724 --rc geninfo_all_blocks=1 00:34:02.724 --rc geninfo_unexecuted_blocks=1 00:34:02.724 00:34:02.724 ' 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:02.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.724 --rc genhtml_branch_coverage=1 00:34:02.724 --rc genhtml_function_coverage=1 00:34:02.724 --rc genhtml_legend=1 00:34:02.724 --rc geninfo_all_blocks=1 00:34:02.724 --rc geninfo_unexecuted_blocks=1 00:34:02.724 00:34:02.724 ' 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:02.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1431962 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1431962 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1431962 ']' 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:02.724 
05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:02.724 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.724 [2024-12-10 05:58:50.416510] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:34:02.724 [2024-12-10 05:58:50.416556] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431962 ] 00:34:02.724 [2024-12-10 05:58:50.488799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:02.724 [2024-12-10 05:58:50.531304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.724 [2024-12-10 05:58:50.531305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.983 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.983 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:34:02.983 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:02.983 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:02.983 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.983 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:02.983 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:02.983 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:34:02.983 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:02.983 05:58:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.983 05:58:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:02.983 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:02.983 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:02.983 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:02.983 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:02.983 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:02.983 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:02.983 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:02.984 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:02.984 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:02.984 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:02.984 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:02.984 ' 00:34:05.516 [2024-12-10 05:58:53.378533] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.892 [2024-12-10 05:58:54.710932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:09.423 [2024-12-10 05:58:57.198607] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:34:11.954 [2024-12-10 05:58:59.345523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:13.331 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:13.331 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:13.331 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:13.331 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:13.331 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:13.331 05:59:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:13.331 05:59:01 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:34:13.331 05:59:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.331 05:59:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:13.331 05:59:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:13.331 05:59:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.331 05:59:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:13.332 05:59:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:13.899 05:59:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:13.899 05:59:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:13.899 05:59:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:13.899 05:59:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:13.899 05:59:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.899 05:59:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:13.899 05:59:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:13.899 05:59:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.899 05:59:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:13.899 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:13.899 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:13.899 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:13.899 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:13.899 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:13.899 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:13.899 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:13.899 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:13.899 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:13.899 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:13.899 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:13.899 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:13.899 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:13.899 ' 00:34:20.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:20.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:20.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:20.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:20.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:20.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:20.469 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:20.469 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:20.469 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:20.469 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:20.469 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:20.469 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:20.469 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:20.469 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1431962 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1431962 ']' 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1431962 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1431962 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1431962' 00:34:20.469 killing process with pid 1431962 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1431962 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1431962 00:34:20.469 05:59:07 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1431962 ']' 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1431962 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1431962 ']' 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1431962 00:34:20.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1431962) - No such process 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1431962 is not found' 00:34:20.469 Process with pid 1431962 is not found 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:20.469 00:34:20.469 real 0m17.333s 00:34:20.469 user 0m38.194s 00:34:20.469 sys 0m0.773s 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:20.469 05:59:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:20.469 ************************************ 00:34:20.469 END TEST spdkcli_nvmf_tcp 00:34:20.469 ************************************ 00:34:20.469 05:59:07 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:20.469 05:59:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:20.469 05:59:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:34:20.469 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:34:20.469 ************************************ 00:34:20.469 START TEST nvmf_identify_passthru 00:34:20.469 ************************************ 00:34:20.469 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:20.469 * Looking for test storage... 00:34:20.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:20.469 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:20.469 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:34:20.469 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:20.469 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:20.469 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:20.469 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.469 --rc genhtml_branch_coverage=1 00:34:20.469 --rc genhtml_function_coverage=1 00:34:20.469 --rc genhtml_legend=1 00:34:20.469 --rc geninfo_all_blocks=1 00:34:20.469 --rc geninfo_unexecuted_blocks=1 00:34:20.469 
00:34:20.469 ' 00:34:20.469 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.469 --rc genhtml_branch_coverage=1 00:34:20.469 --rc genhtml_function_coverage=1 00:34:20.469 --rc genhtml_legend=1 00:34:20.469 --rc geninfo_all_blocks=1 00:34:20.469 --rc geninfo_unexecuted_blocks=1 00:34:20.469 00:34:20.469 ' 00:34:20.469 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.469 --rc genhtml_branch_coverage=1 00:34:20.469 --rc genhtml_function_coverage=1 00:34:20.469 --rc genhtml_legend=1 00:34:20.469 --rc geninfo_all_blocks=1 00:34:20.469 --rc geninfo_unexecuted_blocks=1 00:34:20.469 00:34:20.469 ' 00:34:20.469 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.469 --rc genhtml_branch_coverage=1 00:34:20.469 --rc genhtml_function_coverage=1 00:34:20.469 --rc genhtml_legend=1 00:34:20.469 --rc geninfo_all_blocks=1 00:34:20.469 --rc geninfo_unexecuted_blocks=1 00:34:20.469 00:34:20.469 ' 00:34:20.469 05:59:07 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:20.469 05:59:07 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:20.469 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:20.469 05:59:07 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.470 05:59:07 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.470 05:59:07 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.470 05:59:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.470 05:59:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.470 05:59:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.470 05:59:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:20.470 05:59:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:20.470 05:59:07 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:20.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:20.470 05:59:07 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:20.470 05:59:07 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:20.470 05:59:07 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.470 05:59:07 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.470 05:59:07 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.470 05:59:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.470 05:59:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.470 05:59:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.470 05:59:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:20.470 05:59:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.470 05:59:07 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.470 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:20.470 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:20.470 05:59:07 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:20.470 05:59:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:25.783 
05:59:13 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:25.783 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.783 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:25.784 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:25.784 Found net devices under 0000:af:00.0: cvl_0_0 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.784 05:59:13 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:25.784 Found net devices under 0000:af:00.1: cvl_0_1 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:25.784 
05:59:13 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:25.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:25.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:34:25.784 00:34:25.784 --- 10.0.0.2 ping statistics --- 00:34:25.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.784 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:25.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:25.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:34:25.784 00:34:25.784 --- 10.0.0.1 ping statistics --- 00:34:25.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.784 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:25.784 05:59:13 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:25.784 05:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:25.784 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:25.784 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.784 05:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:25.784 
05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:25.784 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:25.784 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:25.784 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:25.784 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:25.784 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:25.784 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:25.784 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:25.784 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:26.044 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:26.044 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:34:26.044 05:59:13 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:34:26.044 05:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:34:26.044 05:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:34:26.044 05:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:26.044 05:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:26.044 05:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:30.235 05:59:17 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:34:30.235 05:59:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:30.235 05:59:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:30.235 05:59:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:34.427 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:34.427 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.427 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.427 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1439269 00:34:34.427 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:34.427 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:34.427 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1439269 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1439269 ']' 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.427 [2024-12-10 05:59:22.118397] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:34:34.427 [2024-12-10 05:59:22.118443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.427 [2024-12-10 05:59:22.192190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:34.427 [2024-12-10 05:59:22.231521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:34.427 [2024-12-10 05:59:22.231560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:34.427 [2024-12-10 05:59:22.231568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:34.427 [2024-12-10 05:59:22.231573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:34.427 [2024-12-10 05:59:22.231577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:34.427 [2024-12-10 05:59:22.233048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.427 [2024-12-10 05:59:22.233158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:34.427 [2024-12-10 05:59:22.233269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.427 [2024-12-10 05:59:22.233270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:34.427 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.427 INFO: Log level set to 20 00:34:34.427 INFO: Requests: 00:34:34.427 { 00:34:34.427 "jsonrpc": "2.0", 00:34:34.427 "method": "nvmf_set_config", 00:34:34.427 "id": 1, 00:34:34.427 "params": { 00:34:34.427 "admin_cmd_passthru": { 00:34:34.427 "identify_ctrlr": true 00:34:34.427 } 00:34:34.427 } 00:34:34.427 } 00:34:34.427 00:34:34.427 INFO: response: 00:34:34.427 { 00:34:34.427 "jsonrpc": "2.0", 00:34:34.427 "id": 1, 00:34:34.427 "result": true 00:34:34.427 } 00:34:34.427 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.427 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.427 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.427 INFO: Setting log level to 20 00:34:34.427 INFO: Setting log level to 20 00:34:34.427 INFO: Log level set to 20 00:34:34.427 INFO: Log level set to 20 00:34:34.427 
INFO: Requests: 00:34:34.427 { 00:34:34.427 "jsonrpc": "2.0", 00:34:34.427 "method": "framework_start_init", 00:34:34.427 "id": 1 00:34:34.427 } 00:34:34.427 00:34:34.428 INFO: Requests: 00:34:34.428 { 00:34:34.428 "jsonrpc": "2.0", 00:34:34.428 "method": "framework_start_init", 00:34:34.428 "id": 1 00:34:34.428 } 00:34:34.428 00:34:34.687 [2024-12-10 05:59:22.357397] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:34.687 INFO: response: 00:34:34.687 { 00:34:34.687 "jsonrpc": "2.0", 00:34:34.687 "id": 1, 00:34:34.687 "result": true 00:34:34.687 } 00:34:34.687 00:34:34.687 INFO: response: 00:34:34.687 { 00:34:34.687 "jsonrpc": "2.0", 00:34:34.687 "id": 1, 00:34:34.687 "result": true 00:34:34.687 } 00:34:34.687 00:34:34.687 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.687 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:34.687 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.687 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.687 INFO: Setting log level to 40 00:34:34.687 INFO: Setting log level to 40 00:34:34.687 INFO: Setting log level to 40 00:34:34.687 [2024-12-10 05:59:22.370654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.687 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.687 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:34.687 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:34.687 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.687 05:59:22 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:34.687 05:59:22 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.687 05:59:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:37.976 Nvme0n1 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:37.976 [2024-12-10 05:59:25.282663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.976 05:59:25 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:37.976 [ 00:34:37.976 { 00:34:37.976 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:37.976 "subtype": "Discovery", 00:34:37.976 "listen_addresses": [], 00:34:37.976 "allow_any_host": true, 00:34:37.976 "hosts": [] 00:34:37.976 }, 00:34:37.976 { 00:34:37.976 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:37.976 "subtype": "NVMe", 00:34:37.976 "listen_addresses": [ 00:34:37.976 { 00:34:37.976 "trtype": "TCP", 00:34:37.976 "adrfam": "IPv4", 00:34:37.976 "traddr": "10.0.0.2", 00:34:37.976 "trsvcid": "4420" 00:34:37.976 } 00:34:37.976 ], 00:34:37.976 "allow_any_host": true, 00:34:37.976 "hosts": [], 00:34:37.976 "serial_number": "SPDK00000000000001", 00:34:37.976 "model_number": "SPDK bdev Controller", 00:34:37.976 "max_namespaces": 1, 00:34:37.976 "min_cntlid": 1, 00:34:37.976 "max_cntlid": 65519, 00:34:37.976 "namespaces": [ 00:34:37.976 { 00:34:37.976 "nsid": 1, 00:34:37.976 "bdev_name": "Nvme0n1", 00:34:37.976 "name": "Nvme0n1", 00:34:37.976 "nguid": "0D4F063F0CE34E399E5CFEF4CCDE99BE", 00:34:37.976 "uuid": "0d4f063f-0ce3-4e39-9e5c-fef4ccde99be" 00:34:37.976 } 00:34:37.976 ] 00:34:37.976 } 00:34:37.976 ] 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:37.976 05:59:25 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:37.976 05:59:25 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:37.976 05:59:25 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:37.976 05:59:25 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:37.976 05:59:25 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:37.976 05:59:25 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:37.976 05:59:25 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:37.976 rmmod nvme_tcp 00:34:37.976 rmmod nvme_fabrics 00:34:37.976 rmmod nvme_keyring 00:34:37.976 05:59:25 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:37.976 05:59:25 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:37.976 05:59:25 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:37.976 05:59:25 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1439269 ']' 00:34:37.976 05:59:25 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1439269 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1439269 ']' 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1439269 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:37.976 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1439269 00:34:38.235 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:38.235 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:38.235 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1439269' 00:34:38.235 killing process with pid 1439269 00:34:38.235 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1439269 00:34:38.235 05:59:25 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1439269 00:34:39.612 05:59:27 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:39.612 05:59:27 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:39.612 05:59:27 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:39.612 05:59:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:39.612 05:59:27 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:39.612 05:59:27 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # iptables-save 00:34:39.613 05:59:27 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:39.613 05:59:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:39.613 05:59:27 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:39.613 05:59:27 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.613 05:59:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:39.613 05:59:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.148 05:59:29 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:42.148 00:34:42.148 real 0m21.891s 00:34:42.148 user 0m27.128s 00:34:42.148 sys 0m6.258s 00:34:42.148 05:59:29 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:42.148 05:59:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:42.148 ************************************ 00:34:42.148 END TEST nvmf_identify_passthru 00:34:42.148 ************************************ 00:34:42.148 05:59:29 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:42.148 05:59:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:42.148 05:59:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:42.148 05:59:29 -- common/autotest_common.sh@10 -- # set +x 00:34:42.148 ************************************ 00:34:42.148 START TEST nvmf_dif 00:34:42.148 ************************************ 00:34:42.148 05:59:29 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:42.148 * Looking for test storage... 
00:34:42.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:42.148 05:59:29 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:42.148 05:59:29 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:34:42.148 05:59:29 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:42.148 05:59:29 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:42.148 05:59:29 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:42.148 05:59:29 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:42.148 05:59:29 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:42.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:42.148 --rc genhtml_branch_coverage=1 00:34:42.148 --rc genhtml_function_coverage=1 00:34:42.148 --rc genhtml_legend=1 00:34:42.148 --rc geninfo_all_blocks=1 00:34:42.148 --rc geninfo_unexecuted_blocks=1 00:34:42.148 00:34:42.148 ' 00:34:42.148 05:59:29 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:42.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:42.148 --rc genhtml_branch_coverage=1 00:34:42.148 --rc genhtml_function_coverage=1 00:34:42.148 --rc genhtml_legend=1 00:34:42.148 --rc geninfo_all_blocks=1 00:34:42.148 --rc geninfo_unexecuted_blocks=1 00:34:42.148 00:34:42.148 ' 00:34:42.148 05:59:29 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:34:42.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:42.148 --rc genhtml_branch_coverage=1 00:34:42.148 --rc genhtml_function_coverage=1 00:34:42.148 --rc genhtml_legend=1 00:34:42.148 --rc geninfo_all_blocks=1 00:34:42.148 --rc geninfo_unexecuted_blocks=1 00:34:42.148 00:34:42.149 ' 00:34:42.149 05:59:29 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:42.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:42.149 --rc genhtml_branch_coverage=1 00:34:42.149 --rc genhtml_function_coverage=1 00:34:42.149 --rc genhtml_legend=1 00:34:42.149 --rc geninfo_all_blocks=1 00:34:42.149 --rc geninfo_unexecuted_blocks=1 00:34:42.149 00:34:42.149 ' 00:34:42.149 05:59:29 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:42.149 05:59:29 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:42.149 05:59:29 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:42.149 05:59:29 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:42.149 05:59:29 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:42.149 05:59:29 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:42.149 05:59:29 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.149 05:59:29 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.149 05:59:29 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.149 05:59:29 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:42.149 05:59:29 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:42.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:42.149 05:59:29 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:42.149 05:59:29 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:34:42.149 05:59:29 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:42.149 05:59:29 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:42.149 05:59:29 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.149 05:59:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:42.149 05:59:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:42.149 05:59:29 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:34:42.149 05:59:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:47.423 05:59:35 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:47.423 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:47.423 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.423 05:59:35 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.424 05:59:35 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:47.424 Found net devices under 0000:af:00.0: cvl_0_0 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:47.424 Found net devices under 0000:af:00.1: cvl_0_1 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.424 
05:59:35 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.424 05:59:35 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:47.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:47.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:34:47.683 00:34:47.683 --- 10.0.0.2 ping statistics --- 00:34:47.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.683 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:47.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:34:47.683 00:34:47.683 --- 10.0.0.1 ping statistics --- 00:34:47.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.683 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:47.683 05:59:35 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:50.972 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:50.972 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:34:50.972 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:50.972 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:50.972 05:59:38 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:50.972 05:59:38 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1444645 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1444645 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1444645 ']' 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:50.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:50.972 [2024-12-10 05:59:38.489493] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:34:50.972 [2024-12-10 05:59:38.489537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:50.972 [2024-12-10 05:59:38.567477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.972 [2024-12-10 05:59:38.606542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:50.972 [2024-12-10 05:59:38.606576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:50.972 [2024-12-10 05:59:38.606583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:50.972 [2024-12-10 05:59:38.606590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:50.972 [2024-12-10 05:59:38.606595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:50.972 [2024-12-10 05:59:38.607078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:50.972 05:59:38 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:50.972 05:59:38 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:50.972 05:59:38 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:50.972 [2024-12-10 05:59:38.741859] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.972 05:59:38 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:50.972 05:59:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:50.972 ************************************ 00:34:50.972 START TEST fio_dif_1_default 00:34:50.972 ************************************ 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:50.972 bdev_null0 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.972 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:50.973 [2024-12-10 05:59:38.814136] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:50.973 { 00:34:50.973 "params": { 00:34:50.973 "name": "Nvme$subsystem", 00:34:50.973 "trtype": "$TEST_TRANSPORT", 00:34:50.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.973 "adrfam": "ipv4", 00:34:50.973 "trsvcid": "$NVMF_PORT", 00:34:50.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.973 "hdgst": ${hdgst:-false}, 00:34:50.973 "ddgst": ${ddgst:-false} 00:34:50.973 }, 00:34:50.973 "method": "bdev_nvme_attach_controller" 00:34:50.973 } 00:34:50.973 EOF 00:34:50.973 )") 
00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:50.973 "params": { 00:34:50.973 "name": "Nvme0", 00:34:50.973 "trtype": "tcp", 00:34:50.973 "traddr": "10.0.0.2", 00:34:50.973 "adrfam": "ipv4", 00:34:50.973 "trsvcid": "4420", 00:34:50.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:50.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:50.973 "hdgst": false, 00:34:50.973 "ddgst": false 00:34:50.973 }, 00:34:50.973 "method": "bdev_nvme_attach_controller" 00:34:50.973 }' 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:50.973 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:51.251 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:51.251 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:51.251 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:51.251 05:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.513 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:51.513 fio-3.35 
00:34:51.513 Starting 1 thread 00:35:03.710 00:35:03.710 filename0: (groupid=0, jobs=1): err= 0: pid=1445006: Tue Dec 10 05:59:49 2024 00:35:03.710 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:35:03.710 slat (nsec): min=5811, max=27021, avg=6256.63, stdev=1398.15 00:35:03.710 clat (usec): min=40787, max=46667, avg=41018.35, stdev=381.41 00:35:03.710 lat (usec): min=40794, max=46694, avg=41024.61, stdev=381.89 00:35:03.711 clat percentiles (usec): 00:35:03.711 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:03.711 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:03.711 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:03.711 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:35:03.711 | 99.99th=[46924] 00:35:03.711 bw ( KiB/s): min= 384, max= 416, per=99.51%, avg=388.80, stdev=11.72, samples=20 00:35:03.711 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:03.711 lat (msec) : 50=100.00% 00:35:03.711 cpu : usr=92.30%, sys=7.47%, ctx=13, majf=0, minf=0 00:35:03.711 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.711 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.711 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:03.711 00:35:03.711 Run status group 0 (all jobs): 00:35:03.711 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10013-10013msec 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.711 00:35:03.711 real 0m11.183s 00:35:03.711 user 0m16.041s 00:35:03.711 sys 0m1.038s 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.711 05:59:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:03.711 ************************************ 00:35:03.711 END TEST fio_dif_1_default 00:35:03.711 ************************************ 00:35:03.711 05:59:49 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:03.711 05:59:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:03.711 05:59:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.711 05:59:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:03.711 ************************************ 00:35:03.711 START TEST fio_dif_1_multi_subsystems 00:35:03.711 ************************************ 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.711 bdev_null0 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.711 [2024-12-10 05:59:50.075238] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.711 bdev_null1 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.711 05:59:50 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:03.711 { 00:35:03.711 "params": { 00:35:03.711 "name": "Nvme$subsystem", 00:35:03.711 "trtype": "$TEST_TRANSPORT", 00:35:03.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.711 "adrfam": "ipv4", 00:35:03.711 "trsvcid": "$NVMF_PORT", 00:35:03.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.711 "hdgst": ${hdgst:-false}, 00:35:03.711 "ddgst": ${ddgst:-false} 00:35:03.711 }, 00:35:03.711 "method": "bdev_nvme_attach_controller" 00:35:03.711 } 00:35:03.711 EOF 00:35:03.711 )") 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.711 05:59:50 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:03.711 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:03.712 { 00:35:03.712 "params": { 00:35:03.712 "name": "Nvme$subsystem", 00:35:03.712 "trtype": "$TEST_TRANSPORT", 00:35:03.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.712 "adrfam": "ipv4", 00:35:03.712 "trsvcid": "$NVMF_PORT", 00:35:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.712 "hdgst": ${hdgst:-false}, 00:35:03.712 "ddgst": ${ddgst:-false} 00:35:03.712 }, 00:35:03.712 "method": "bdev_nvme_attach_controller" 00:35:03.712 } 00:35:03.712 EOF 00:35:03.712 )") 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:03.712 "params": { 00:35:03.712 "name": "Nvme0", 00:35:03.712 "trtype": "tcp", 00:35:03.712 "traddr": "10.0.0.2", 00:35:03.712 "adrfam": "ipv4", 00:35:03.712 "trsvcid": "4420", 00:35:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.712 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:03.712 "hdgst": false, 00:35:03.712 "ddgst": false 00:35:03.712 }, 00:35:03.712 "method": "bdev_nvme_attach_controller" 00:35:03.712 },{ 00:35:03.712 "params": { 00:35:03.712 "name": "Nvme1", 00:35:03.712 "trtype": "tcp", 00:35:03.712 "traddr": "10.0.0.2", 00:35:03.712 "adrfam": "ipv4", 00:35:03.712 "trsvcid": "4420", 00:35:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:03.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:03.712 "hdgst": false, 00:35:03.712 "ddgst": false 00:35:03.712 }, 00:35:03.712 "method": "bdev_nvme_attach_controller" 00:35:03.712 }' 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:03.712 05:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.712 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:03.712 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:03.712 fio-3.35 00:35:03.712 Starting 2 threads 00:35:13.685 00:35:13.685 filename0: (groupid=0, jobs=1): err= 0: pid=1446925: Tue Dec 10 06:00:01 2024 00:35:13.685 read: IOPS=95, BW=382KiB/s (392kB/s)(3840KiB/10042msec) 00:35:13.685 slat (nsec): min=5906, max=55217, avg=10494.40, stdev=7823.86 00:35:13.685 clat (usec): min=40877, max=42259, avg=41804.47, stdev=365.43 00:35:13.685 lat (usec): min=40883, max=42291, avg=41814.97, stdev=365.54 00:35:13.685 clat percentiles (usec): 00:35:13.685 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:35:13.685 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:13.685 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:13.685 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:13.685 | 99.99th=[42206] 00:35:13.685 bw ( KiB/s): min= 352, max= 384, per=33.77%, avg=382.40, stdev= 7.16, samples=20 00:35:13.685 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:35:13.685 lat (msec) : 50=100.00% 00:35:13.685 cpu : usr=97.50%, sys=2.20%, ctx=15, majf=0, minf=84 00:35:13.685 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.685 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.685 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.685 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:13.685 filename1: (groupid=0, jobs=1): err= 0: pid=1446926: Tue Dec 10 06:00:01 2024 00:35:13.685 read: IOPS=187, BW=751KiB/s (769kB/s)(7520KiB/10008msec) 00:35:13.685 slat (nsec): min=6039, max=61039, avg=8599.46, stdev=5573.78 00:35:13.685 clat (usec): min=463, max=42523, avg=21266.24, stdev=20539.20 00:35:13.685 lat (usec): min=469, max=42530, avg=21274.84, stdev=20537.59 00:35:13.685 clat percentiles (usec): 00:35:13.685 | 1.00th=[ 506], 5.00th=[ 603], 10.00th=[ 619], 20.00th=[ 635], 00:35:13.685 | 30.00th=[ 644], 40.00th=[ 660], 50.00th=[41157], 60.00th=[41157], 00:35:13.685 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:13.685 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:13.685 | 99.99th=[42730] 00:35:13.685 bw ( KiB/s): min= 672, max= 768, per=66.30%, avg=750.40, stdev=30.22, samples=20 00:35:13.685 iops : min= 168, max= 192, avg=187.60, stdev= 7.56, samples=20 00:35:13.685 lat (usec) : 500=0.85%, 750=43.56%, 1000=5.37% 00:35:13.685 lat (msec) : 50=50.21% 00:35:13.685 cpu : usr=98.34%, sys=1.37%, ctx=28, majf=0, minf=40 00:35:13.685 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.685 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.685 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:13.685 00:35:13.685 Run status group 0 (all jobs): 00:35:13.685 READ: bw=1131KiB/s (1158kB/s), 382KiB/s-751KiB/s (392kB/s-769kB/s), io=11.1MiB (11.6MB), run=10008-10042msec 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:13.685 
06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.685 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.944 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.944 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:13.944 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.944 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.944 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.944 00:35:13.944 real 0m11.548s 00:35:13.944 user 0m27.018s 00:35:13.944 sys 0m0.655s 00:35:13.944 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.944 06:00:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.944 ************************************ 00:35:13.944 END TEST fio_dif_1_multi_subsystems 00:35:13.944 ************************************ 00:35:13.944 06:00:01 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:13.944 06:00:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:13.944 06:00:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.944 06:00:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:13.944 ************************************ 00:35:13.944 START TEST fio_dif_rand_params 00:35:13.944 ************************************ 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.944 bdev_null0 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.944 06:00:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.944 [2024-12-10 06:00:01.698670] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:13.944 { 00:35:13.944 "params": { 00:35:13.944 "name": "Nvme$subsystem", 00:35:13.944 "trtype": "$TEST_TRANSPORT", 00:35:13.944 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:35:13.944 "adrfam": "ipv4", 00:35:13.944 "trsvcid": "$NVMF_PORT", 00:35:13.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.944 "hdgst": ${hdgst:-false}, 00:35:13.944 "ddgst": ${ddgst:-false} 00:35:13.944 }, 00:35:13.944 "method": "bdev_nvme_attach_controller" 00:35:13.944 } 00:35:13.944 EOF 00:35:13.944 )") 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:13.944 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:13.945 06:00:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:13.945 "params": { 00:35:13.945 "name": "Nvme0", 00:35:13.945 "trtype": "tcp", 00:35:13.945 "traddr": "10.0.0.2", 00:35:13.945 "adrfam": "ipv4", 00:35:13.945 "trsvcid": "4420", 00:35:13.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:13.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:13.945 "hdgst": false, 00:35:13.945 "ddgst": false 00:35:13.945 }, 00:35:13.945 "method": "bdev_nvme_attach_controller" 00:35:13.945 }' 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:13.945 06:00:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.203 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:14.203 ... 00:35:14.203 fio-3.35 00:35:14.203 Starting 3 threads 00:35:20.763 00:35:20.763 filename0: (groupid=0, jobs=1): err= 0: pid=1448963: Tue Dec 10 06:00:07 2024 00:35:20.763 read: IOPS=342, BW=42.8MiB/s (44.9MB/s)(216MiB/5045msec) 00:35:20.763 slat (nsec): min=6164, max=27400, avg=10451.69, stdev=2158.41 00:35:20.763 clat (usec): min=3369, max=51573, avg=8717.07, stdev=4916.56 00:35:20.763 lat (usec): min=3376, max=51584, avg=8727.52, stdev=4916.87 00:35:20.763 clat percentiles (usec): 00:35:20.763 | 1.00th=[ 3654], 5.00th=[ 5276], 10.00th=[ 5997], 20.00th=[ 6456], 00:35:20.763 | 30.00th=[ 6980], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 8979], 00:35:20.763 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[11076], 00:35:20.763 | 99.00th=[46400], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:35:20.763 | 99.99th=[51643] 00:35:20.763 bw ( KiB/s): min=39680, max=49408, per=37.74%, avg=44202.80, stdev=3482.92, samples=10 00:35:20.763 iops : min= 310, max= 386, avg=345.30, stdev=27.24, samples=10 00:35:20.763 lat (msec) : 4=2.54%, 10=81.38%, 20=14.75%, 50=1.16%, 100=0.17% 00:35:20.763 cpu : usr=94.49%, sys=5.19%, ctx=10, majf=0, minf=41 00:35:20.763 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.763 issued rwts: total=1729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.763 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:20.763 filename0: (groupid=0, jobs=1): err= 0: pid=1448964: Tue Dec 10 06:00:07 2024 00:35:20.763 read: IOPS=299, BW=37.5MiB/s (39.3MB/s)(188MiB/5012msec) 00:35:20.763 slat (nsec): min=6154, max=27053, avg=10683.56, stdev=2192.93 
00:35:20.763 clat (usec): min=3497, max=51213, avg=9991.02, stdev=7842.78 00:35:20.763 lat (usec): min=3504, max=51225, avg=10001.70, stdev=7842.81 00:35:20.763 clat percentiles (usec): 00:35:20.763 | 1.00th=[ 3851], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 6849], 00:35:20.763 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:35:20.763 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10683], 00:35:20.763 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:35:20.763 | 99.99th=[51119] 00:35:20.763 bw ( KiB/s): min=31488, max=48384, per=32.79%, avg=38400.00, stdev=6149.92, samples=10 00:35:20.763 iops : min= 246, max= 378, avg=300.00, stdev=48.05, samples=10 00:35:20.763 lat (msec) : 4=1.66%, 10=82.57%, 20=11.98%, 50=3.13%, 100=0.67% 00:35:20.763 cpu : usr=95.11%, sys=4.53%, ctx=10, majf=0, minf=40 00:35:20.763 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.763 issued rwts: total=1503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.763 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:20.763 filename0: (groupid=0, jobs=1): err= 0: pid=1448965: Tue Dec 10 06:00:07 2024 00:35:20.763 read: IOPS=276, BW=34.6MiB/s (36.3MB/s)(173MiB/5003msec) 00:35:20.763 slat (nsec): min=6151, max=27271, avg=10765.78, stdev=2132.60 00:35:20.763 clat (usec): min=3451, max=52646, avg=10831.43, stdev=8766.77 00:35:20.763 lat (usec): min=3458, max=52659, avg=10842.20, stdev=8766.79 00:35:20.763 clat percentiles (usec): 00:35:20.763 | 1.00th=[ 3982], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 7177], 00:35:20.763 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:35:20.763 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[12125], 00:35:20.763 | 99.00th=[50594], 99.50th=[51119], 
99.90th=[52167], 99.95th=[52691], 00:35:20.763 | 99.99th=[52691] 00:35:20.763 bw ( KiB/s): min=23040, max=41984, per=29.41%, avg=34446.22, stdev=6785.34, samples=9 00:35:20.763 iops : min= 180, max= 328, avg=269.11, stdev=53.01, samples=9 00:35:20.764 lat (msec) : 4=1.01%, 10=69.15%, 20=25.07%, 50=2.96%, 100=1.81% 00:35:20.764 cpu : usr=94.28%, sys=5.38%, ctx=7, majf=0, minf=61 00:35:20.764 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.764 issued rwts: total=1384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.764 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:20.764 00:35:20.764 Run status group 0 (all jobs): 00:35:20.764 READ: bw=114MiB/s (120MB/s), 34.6MiB/s-42.8MiB/s (36.3MB/s-44.9MB/s), io=577MiB (605MB), run=5003-5045msec 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:20.764 06:00:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 bdev_null0 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 [2024-12-10 06:00:07.825564] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 bdev_null1 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:20.764 bdev_null2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:20.764 { 00:35:20.764 "params": { 00:35:20.764 "name": "Nvme$subsystem", 00:35:20.764 "trtype": "$TEST_TRANSPORT", 00:35:20.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.764 "adrfam": "ipv4", 00:35:20.764 "trsvcid": "$NVMF_PORT", 00:35:20.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.764 "hdgst": ${hdgst:-false}, 00:35:20.764 "ddgst": ${ddgst:-false} 00:35:20.764 }, 00:35:20.764 "method": "bdev_nvme_attach_controller" 00:35:20.764 } 00:35:20.764 EOF 00:35:20.764 )") 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.764 06:00:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:20.764 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:20.765 { 00:35:20.765 "params": { 00:35:20.765 "name": "Nvme$subsystem", 00:35:20.765 "trtype": "$TEST_TRANSPORT", 00:35:20.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.765 "adrfam": "ipv4", 00:35:20.765 "trsvcid": "$NVMF_PORT", 00:35:20.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.765 "hdgst": ${hdgst:-false}, 00:35:20.765 "ddgst": ${ddgst:-false} 00:35:20.765 }, 00:35:20.765 "method": "bdev_nvme_attach_controller" 00:35:20.765 } 00:35:20.765 EOF 00:35:20.765 )") 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:20.765 06:00:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:20.765 { 00:35:20.765 "params": { 00:35:20.765 "name": "Nvme$subsystem", 00:35:20.765 "trtype": "$TEST_TRANSPORT", 00:35:20.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.765 "adrfam": "ipv4", 00:35:20.765 "trsvcid": "$NVMF_PORT", 00:35:20.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.765 "hdgst": ${hdgst:-false}, 00:35:20.765 "ddgst": ${ddgst:-false} 00:35:20.765 }, 00:35:20.765 "method": "bdev_nvme_attach_controller" 00:35:20.765 } 00:35:20.765 EOF 00:35:20.765 )") 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:20.765 "params": { 00:35:20.765 "name": "Nvme0", 00:35:20.765 "trtype": "tcp", 00:35:20.765 "traddr": "10.0.0.2", 00:35:20.765 "adrfam": "ipv4", 00:35:20.765 "trsvcid": "4420", 00:35:20.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:20.765 "hdgst": false, 00:35:20.765 "ddgst": false 00:35:20.765 }, 00:35:20.765 "method": "bdev_nvme_attach_controller" 00:35:20.765 },{ 00:35:20.765 "params": { 00:35:20.765 "name": "Nvme1", 00:35:20.765 "trtype": "tcp", 00:35:20.765 "traddr": "10.0.0.2", 00:35:20.765 "adrfam": "ipv4", 00:35:20.765 "trsvcid": "4420", 00:35:20.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:20.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:20.765 "hdgst": false, 00:35:20.765 "ddgst": false 00:35:20.765 }, 00:35:20.765 "method": "bdev_nvme_attach_controller" 00:35:20.765 },{ 00:35:20.765 "params": { 00:35:20.765 "name": "Nvme2", 00:35:20.765 "trtype": "tcp", 00:35:20.765 "traddr": "10.0.0.2", 00:35:20.765 "adrfam": "ipv4", 00:35:20.765 "trsvcid": "4420", 00:35:20.765 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:20.765 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:20.765 "hdgst": false, 00:35:20.765 "ddgst": false 00:35:20.765 }, 00:35:20.765 "method": "bdev_nvme_attach_controller" 00:35:20.765 }' 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.765 06:00:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:20.765 06:00:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.765 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:20.765 ... 00:35:20.765 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:20.765 ... 00:35:20.765 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:20.765 ... 
00:35:20.765 fio-3.35 00:35:20.765 Starting 24 threads 00:35:33.074 00:35:33.074 filename0: (groupid=0, jobs=1): err= 0: pid=1450593: Tue Dec 10 06:00:19 2024 00:35:33.074 read: IOPS=525, BW=2100KiB/s (2151kB/s)(20.6MiB/10021msec) 00:35:33.074 slat (nsec): min=7436, max=77245, avg=19787.91, stdev=7512.10 00:35:33.074 clat (usec): min=16906, max=43730, avg=30317.86, stdev=1066.40 00:35:33.074 lat (usec): min=16931, max=43748, avg=30337.65, stdev=1066.66 00:35:33.074 clat percentiles (usec): 00:35:33.074 | 1.00th=[28967], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:33.074 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.074 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:35:33.074 | 99.00th=[31851], 99.50th=[34341], 99.90th=[43779], 99.95th=[43779], 00:35:33.074 | 99.99th=[43779] 00:35:33.074 bw ( KiB/s): min= 2036, max= 2176, per=4.16%, avg=2098.60, stdev=55.49, samples=20 00:35:33.074 iops : min= 509, max= 544, avg=524.65, stdev=13.87, samples=20 00:35:33.074 lat (msec) : 20=0.15%, 50=99.85% 00:35:33.074 cpu : usr=98.37%, sys=1.26%, ctx=10, majf=0, minf=22 00:35:33.074 IO depths : 1=0.1%, 2=6.2%, 4=24.9%, 8=56.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:33.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 issued rwts: total=5262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.074 filename0: (groupid=0, jobs=1): err= 0: pid=1450594: Tue Dec 10 06:00:19 2024 00:35:33.074 read: IOPS=527, BW=2109KiB/s (2160kB/s)(20.6MiB/10013msec) 00:35:33.074 slat (usec): min=7, max=322, avg=47.44, stdev=25.66 00:35:33.074 clat (usec): min=7859, max=32420, avg=29921.94, stdev=1403.71 00:35:33.074 lat (usec): min=8049, max=32502, avg=29969.38, stdev=1393.58 00:35:33.074 clat percentiles (usec): 00:35:33.074 | 1.00th=[27657], 
5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:35:33.074 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:33.074 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.074 | 99.00th=[31065], 99.50th=[31327], 99.90th=[32113], 99.95th=[32375], 00:35:33.074 | 99.99th=[32375] 00:35:33.074 bw ( KiB/s): min= 2048, max= 2180, per=4.18%, avg=2106.00, stdev=65.39, samples=20 00:35:33.074 iops : min= 512, max= 545, avg=526.50, stdev=16.35, samples=20 00:35:33.074 lat (msec) : 10=0.17%, 20=0.30%, 50=99.53% 00:35:33.074 cpu : usr=98.45%, sys=1.14%, ctx=18, majf=0, minf=15 00:35:33.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:33.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.074 filename0: (groupid=0, jobs=1): err= 0: pid=1450595: Tue Dec 10 06:00:19 2024 00:35:33.074 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10021msec) 00:35:33.074 slat (nsec): min=7487, max=42331, avg=18824.92, stdev=5811.70 00:35:33.074 clat (usec): min=17914, max=43755, avg=30072.84, stdev=1652.28 00:35:33.074 lat (usec): min=17922, max=43771, avg=30091.67, stdev=1653.64 00:35:33.074 clat percentiles (usec): 00:35:33.074 | 1.00th=[21627], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:33.074 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.074 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.074 | 99.00th=[31851], 99.50th=[34341], 99.90th=[43779], 99.95th=[43779], 00:35:33.074 | 99.99th=[43779] 00:35:33.074 bw ( KiB/s): min= 1968, max= 2432, per=4.20%, avg=2114.40, stdev=101.76, samples=20 00:35:33.074 iops : min= 492, max= 608, avg=528.60, stdev=25.44, samples=20 
00:35:33.074 lat (msec) : 20=0.60%, 50=99.40% 00:35:33.074 cpu : usr=98.31%, sys=1.31%, ctx=13, majf=0, minf=12 00:35:33.074 IO depths : 1=5.8%, 2=11.8%, 4=24.5%, 8=51.2%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:33.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 issued rwts: total=5302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.074 filename0: (groupid=0, jobs=1): err= 0: pid=1450596: Tue Dec 10 06:00:19 2024 00:35:33.074 read: IOPS=525, BW=2103KiB/s (2153kB/s)(20.6MiB/10014msec) 00:35:33.074 slat (nsec): min=6356, max=87272, avg=31668.91, stdev=15068.17 00:35:33.074 clat (usec): min=19039, max=35195, avg=30175.01, stdev=714.31 00:35:33.074 lat (usec): min=19066, max=35212, avg=30206.68, stdev=711.73 00:35:33.074 clat percentiles (usec): 00:35:33.074 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:33.074 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:33.074 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.074 | 99.00th=[31327], 99.50th=[31589], 99.90th=[35390], 99.95th=[35390], 00:35:33.074 | 99.99th=[35390] 00:35:33.074 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2097.45, stdev=62.59, samples=20 00:35:33.074 iops : min= 512, max= 544, avg=524.35, stdev=15.64, samples=20 00:35:33.074 lat (msec) : 20=0.30%, 50=99.70% 00:35:33.074 cpu : usr=98.49%, sys=1.13%, ctx=16, majf=0, minf=18 00:35:33.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:33.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.074 latency : target=0, window=0, percentile=100.00%, depth=16 
00:35:33.074 filename0: (groupid=0, jobs=1): err= 0: pid=1450597: Tue Dec 10 06:00:19 2024 00:35:33.074 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10006msec) 00:35:33.074 slat (nsec): min=7630, max=88586, avg=32802.88, stdev=15721.54 00:35:33.074 clat (usec): min=19051, max=63370, avg=30195.94, stdev=1670.06 00:35:33.074 lat (usec): min=19074, max=63382, avg=30228.74, stdev=1668.70 00:35:33.074 clat percentiles (usec): 00:35:33.074 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:33.074 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:33.074 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:35:33.074 | 99.00th=[31327], 99.50th=[31589], 99.90th=[56886], 99.95th=[56886], 00:35:33.074 | 99.99th=[63177] 00:35:33.074 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2092.80, stdev=74.48, samples=20 00:35:33.074 iops : min= 480, max= 544, avg=523.10, stdev=18.68, samples=20 00:35:33.074 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:35:33.074 cpu : usr=98.33%, sys=1.29%, ctx=14, majf=0, minf=16 00:35:33.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:33.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.074 filename0: (groupid=0, jobs=1): err= 0: pid=1450598: Tue Dec 10 06:00:19 2024 00:35:33.074 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10006msec) 00:35:33.074 slat (nsec): min=7700, max=84243, avg=32979.64, stdev=15323.29 00:35:33.074 clat (usec): min=18984, max=63362, avg=30228.58, stdev=1665.54 00:35:33.074 lat (usec): min=19023, max=63375, avg=30261.55, stdev=1663.41 00:35:33.074 clat percentiles (usec): 00:35:33.074 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 
00:35:33.074 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:33.074 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.074 | 99.00th=[31327], 99.50th=[31589], 99.90th=[56886], 99.95th=[56886], 00:35:33.074 | 99.99th=[63177] 00:35:33.074 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2092.80, stdev=75.15, samples=20 00:35:33.074 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:33.074 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:35:33.074 cpu : usr=98.52%, sys=1.11%, ctx=13, majf=0, minf=30 00:35:33.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:33.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.074 filename0: (groupid=0, jobs=1): err= 0: pid=1450599: Tue Dec 10 06:00:19 2024 00:35:33.074 read: IOPS=528, BW=2113KiB/s (2164kB/s)(20.7MiB/10026msec) 00:35:33.074 slat (nsec): min=7681, max=40015, avg=19820.88, stdev=5860.95 00:35:33.074 clat (usec): min=13078, max=32236, avg=30111.30, stdev=1501.15 00:35:33.074 lat (usec): min=13101, max=32257, avg=30131.12, stdev=1501.59 00:35:33.074 clat percentiles (usec): 00:35:33.074 | 1.00th=[20579], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:33.074 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.074 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.074 | 99.00th=[31327], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:35:33.074 | 99.99th=[32113] 00:35:33.074 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2112.40, stdev=77.36, samples=20 00:35:33.074 iops : min= 512, max= 576, avg=528.10, stdev=19.34, samples=20 00:35:33.074 lat (msec) : 20=0.94%, 50=99.06% 00:35:33.074 
cpu : usr=98.33%, sys=1.29%, ctx=16, majf=0, minf=22 00:35:33.074 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:33.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.074 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.075 filename0: (groupid=0, jobs=1): err= 0: pid=1450600: Tue Dec 10 06:00:19 2024 00:35:33.075 read: IOPS=523, BW=2095KiB/s (2145kB/s)(20.5MiB/10045msec) 00:35:33.075 slat (nsec): min=7354, max=56916, avg=18271.02, stdev=6936.73 00:35:33.075 clat (usec): min=18855, max=58371, avg=30303.00, stdev=2052.43 00:35:33.075 lat (usec): min=18864, max=58385, avg=30321.27, stdev=2052.35 00:35:33.075 clat percentiles (usec): 00:35:33.075 | 1.00th=[22414], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:33.075 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.075 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:35:33.075 | 99.00th=[31589], 99.50th=[43254], 99.90th=[58459], 99.95th=[58459], 00:35:33.075 | 99.99th=[58459] 00:35:33.075 bw ( KiB/s): min= 1920, max= 2167, per=4.17%, avg=2102.75, stdev=65.68, samples=20 00:35:33.075 iops : min= 480, max= 541, avg=525.65, stdev=16.38, samples=20 00:35:33.075 lat (msec) : 20=0.19%, 50=99.47%, 100=0.34% 00:35:33.075 cpu : usr=98.45%, sys=1.18%, ctx=14, majf=0, minf=24 00:35:33.075 IO depths : 1=0.3%, 2=6.4%, 4=24.6%, 8=56.5%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:33.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 issued rwts: total=5260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.075 filename1: (groupid=0, jobs=1): err= 0: 
pid=1450601: Tue Dec 10 06:00:19 2024 00:35:33.075 read: IOPS=528, BW=2113KiB/s (2164kB/s)(20.7MiB/10026msec) 00:35:33.075 slat (nsec): min=7502, max=62776, avg=16269.37, stdev=6836.50 00:35:33.075 clat (usec): min=12617, max=32259, avg=30157.91, stdev=1540.40 00:35:33.075 lat (usec): min=12637, max=32276, avg=30174.18, stdev=1539.27 00:35:33.075 clat percentiles (usec): 00:35:33.075 | 1.00th=[20579], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:33.075 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.075 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.075 | 99.00th=[31327], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:35:33.075 | 99.99th=[32375] 00:35:33.075 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2112.40, stdev=77.36, samples=20 00:35:33.075 iops : min= 512, max= 576, avg=528.10, stdev=19.34, samples=20 00:35:33.075 lat (msec) : 20=0.91%, 50=99.09% 00:35:33.075 cpu : usr=98.42%, sys=1.20%, ctx=14, majf=0, minf=29 00:35:33.075 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:33.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.075 filename1: (groupid=0, jobs=1): err= 0: pid=1450602: Tue Dec 10 06:00:19 2024 00:35:33.075 read: IOPS=524, BW=2098KiB/s (2149kB/s)(20.5MiB/10005msec) 00:35:33.075 slat (nsec): min=4581, max=86254, avg=32739.84, stdev=15457.61 00:35:33.075 clat (usec): min=19049, max=62541, avg=30201.37, stdev=1658.78 00:35:33.075 lat (usec): min=19084, max=62555, avg=30234.10, stdev=1657.31 00:35:33.075 clat percentiles (usec): 00:35:33.075 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:33.075 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 
60.00th=[30278], 00:35:33.075 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.075 | 99.00th=[31589], 99.50th=[32113], 99.90th=[56361], 99.95th=[56361], 00:35:33.075 | 99.99th=[62653] 00:35:33.075 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2092.80, stdev=75.15, samples=20 00:35:33.075 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:33.075 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:35:33.075 cpu : usr=98.45%, sys=1.17%, ctx=14, majf=0, minf=24 00:35:33.075 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:33.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.075 filename1: (groupid=0, jobs=1): err= 0: pid=1450603: Tue Dec 10 06:00:19 2024 00:35:33.075 read: IOPS=525, BW=2101KiB/s (2152kB/s)(20.6MiB/10016msec) 00:35:33.075 slat (nsec): min=6223, max=85909, avg=28843.40, stdev=14911.86 00:35:33.075 clat (usec): min=19548, max=38051, avg=30251.58, stdev=748.07 00:35:33.075 lat (usec): min=19611, max=38085, avg=30280.42, stdev=745.17 00:35:33.075 clat percentiles (usec): 00:35:33.075 | 1.00th=[29754], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:33.075 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.075 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.075 | 99.00th=[31589], 99.50th=[31851], 99.90th=[36963], 99.95th=[36963], 00:35:33.075 | 99.99th=[38011] 00:35:33.075 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2097.05, stdev=55.94, samples=20 00:35:33.075 iops : min= 512, max= 544, avg=524.25, stdev=13.98, samples=20 00:35:33.075 lat (msec) : 20=0.21%, 50=99.79% 00:35:33.075 cpu : usr=98.21%, sys=1.20%, ctx=80, majf=0, minf=25 00:35:33.075 IO 
depths : 1=0.1%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:35:33.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 issued rwts: total=5262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.075 filename1: (groupid=0, jobs=1): err= 0: pid=1450604: Tue Dec 10 06:00:19 2024 00:35:33.075 read: IOPS=525, BW=2103KiB/s (2153kB/s)(20.6MiB/10025msec) 00:35:33.075 slat (nsec): min=7660, max=43633, avg=20285.12, stdev=6663.21 00:35:33.075 clat (usec): min=17369, max=48183, avg=30243.08, stdev=1119.45 00:35:33.075 lat (usec): min=17379, max=48197, avg=30263.36, stdev=1119.80 00:35:33.075 clat percentiles (usec): 00:35:33.075 | 1.00th=[26608], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:33.075 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.075 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.075 | 99.00th=[31851], 99.50th=[33162], 99.90th=[47973], 99.95th=[47973], 00:35:33.075 | 99.99th=[47973] 00:35:33.075 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2101.60, stdev=63.21, samples=20 00:35:33.075 iops : min= 512, max= 544, avg=525.40, stdev=15.80, samples=20 00:35:33.075 lat (msec) : 20=0.04%, 50=99.96% 00:35:33.075 cpu : usr=98.62%, sys=1.01%, ctx=14, majf=0, minf=22 00:35:33.075 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:33.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 issued rwts: total=5270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.075 filename1: (groupid=0, jobs=1): err= 0: pid=1450605: Tue Dec 10 06:00:19 2024 00:35:33.075 read: IOPS=527, BW=2112KiB/s 
(2162kB/s)(20.6MiB/10013msec) 00:35:33.075 slat (nsec): min=7498, max=46502, avg=18836.13, stdev=6273.54 00:35:33.075 clat (usec): min=12747, max=48641, avg=30151.24, stdev=2171.00 00:35:33.075 lat (usec): min=12761, max=48655, avg=30170.08, stdev=2171.07 00:35:33.075 clat percentiles (usec): 00:35:33.075 | 1.00th=[20579], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:33.075 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.075 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:33.075 | 99.00th=[39060], 99.50th=[39060], 99.90th=[48497], 99.95th=[48497], 00:35:33.075 | 99.99th=[48497] 00:35:33.075 bw ( KiB/s): min= 2032, max= 2304, per=4.18%, avg=2108.40, stdev=76.66, samples=20 00:35:33.075 iops : min= 508, max= 576, avg=527.10, stdev=19.16, samples=20 00:35:33.075 lat (msec) : 20=0.95%, 50=99.05% 00:35:33.075 cpu : usr=98.38%, sys=1.24%, ctx=16, majf=0, minf=28 00:35:33.075 IO depths : 1=5.4%, 2=11.6%, 4=24.9%, 8=51.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:33.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 issued rwts: total=5286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.075 filename1: (groupid=0, jobs=1): err= 0: pid=1450606: Tue Dec 10 06:00:19 2024 00:35:33.075 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10006msec) 00:35:33.075 slat (nsec): min=4192, max=95142, avg=32168.05, stdev=15616.84 00:35:33.075 clat (usec): min=19020, max=57297, avg=30197.22, stdev=1634.90 00:35:33.075 lat (usec): min=19040, max=57311, avg=30229.39, stdev=1633.47 00:35:33.075 clat percentiles (usec): 00:35:33.075 | 1.00th=[29754], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:33.075 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:33.075 | 70.00th=[30278], 80.00th=[30278], 
90.00th=[30278], 95.00th=[30540], 00:35:33.075 | 99.00th=[31327], 99.50th=[31589], 99.90th=[57410], 99.95th=[57410], 00:35:33.075 | 99.99th=[57410] 00:35:33.075 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2092.50, stdev=75.10, samples=20 00:35:33.075 iops : min= 480, max= 544, avg=523.05, stdev=18.89, samples=20 00:35:33.075 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:35:33.075 cpu : usr=98.52%, sys=1.10%, ctx=19, majf=0, minf=18 00:35:33.075 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:33.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.075 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.075 filename1: (groupid=0, jobs=1): err= 0: pid=1450607: Tue Dec 10 06:00:19 2024 00:35:33.075 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10009msec) 00:35:33.075 slat (nsec): min=5021, max=41518, avg=18336.42, stdev=5959.00 00:35:33.075 clat (usec): min=19896, max=40250, avg=30273.54, stdev=1126.89 00:35:33.075 lat (usec): min=19904, max=40273, avg=30291.88, stdev=1127.16 00:35:33.075 clat percentiles (usec): 00:35:33.075 | 1.00th=[28443], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:33.075 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.075 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:35:33.075 | 99.00th=[31851], 99.50th=[37487], 99.90th=[40109], 99.95th=[40109], 00:35:33.075 | 99.99th=[40109] 00:35:33.075 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2099.20, stdev=59.78, samples=20 00:35:33.075 iops : min= 512, max= 544, avg=524.80, stdev=14.94, samples=20 00:35:33.075 lat (msec) : 20=0.19%, 50=99.81% 00:35:33.075 cpu : usr=98.45%, sys=1.17%, ctx=15, majf=0, minf=37 00:35:33.075 IO depths : 1=4.4%, 2=10.6%, 4=24.9%, 8=51.9%, 16=8.1%, 32=0.0%, 
>=64=0.0% 00:35:33.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.076 filename1: (groupid=0, jobs=1): err= 0: pid=1450608: Tue Dec 10 06:00:19 2024 00:35:33.076 read: IOPS=528, BW=2113KiB/s (2164kB/s)(20.7MiB/10026msec) 00:35:33.076 slat (nsec): min=8093, max=52322, avg=20189.51, stdev=6147.11 00:35:33.076 clat (usec): min=12698, max=32178, avg=30115.08, stdev=1534.07 00:35:33.076 lat (usec): min=12719, max=32196, avg=30135.27, stdev=1533.61 00:35:33.076 clat percentiles (usec): 00:35:33.076 | 1.00th=[20579], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:33.076 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.076 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.076 | 99.00th=[31327], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:35:33.076 | 99.99th=[32113] 00:35:33.076 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2112.40, stdev=77.36, samples=20 00:35:33.076 iops : min= 512, max= 576, avg=528.10, stdev=19.34, samples=20 00:35:33.076 lat (msec) : 20=0.91%, 50=99.09% 00:35:33.076 cpu : usr=98.46%, sys=1.16%, ctx=13, majf=0, minf=19 00:35:33.076 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:33.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.076 filename2: (groupid=0, jobs=1): err= 0: pid=1450609: Tue Dec 10 06:00:19 2024 00:35:33.076 read: IOPS=531, BW=2124KiB/s (2175kB/s)(20.8MiB/10026msec) 00:35:33.076 slat (nsec): 
min=7445, max=51228, avg=20072.78, stdev=6716.37 00:35:33.076 clat (usec): min=12902, max=40067, avg=29949.33, stdev=2055.65 00:35:33.076 lat (usec): min=12924, max=40099, avg=29969.40, stdev=2056.60 00:35:33.076 clat percentiles (usec): 00:35:33.076 | 1.00th=[17171], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:33.076 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.076 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.076 | 99.00th=[31589], 99.50th=[31851], 99.90th=[40109], 99.95th=[40109], 00:35:33.076 | 99.99th=[40109] 00:35:33.076 bw ( KiB/s): min= 2048, max= 2528, per=4.21%, avg=2123.60, stdev=114.06, samples=20 00:35:33.076 iops : min= 512, max= 632, avg=530.90, stdev=28.52, samples=20 00:35:33.076 lat (msec) : 20=1.84%, 50=98.16% 00:35:33.076 cpu : usr=98.42%, sys=1.20%, ctx=14, majf=0, minf=24 00:35:33.076 IO depths : 1=6.0%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:33.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 issued rwts: total=5324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.076 filename2: (groupid=0, jobs=1): err= 0: pid=1450610: Tue Dec 10 06:00:19 2024 00:35:33.076 read: IOPS=528, BW=2113KiB/s (2164kB/s)(20.7MiB/10026msec) 00:35:33.076 slat (nsec): min=7985, max=59128, avg=16736.11, stdev=6564.15 00:35:33.076 clat (usec): min=12808, max=32266, avg=30148.10, stdev=1534.22 00:35:33.076 lat (usec): min=12857, max=32288, avg=30164.84, stdev=1533.48 00:35:33.076 clat percentiles (usec): 00:35:33.076 | 1.00th=[20579], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:33.076 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.076 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.076 | 99.00th=[31327], 
99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:35:33.076 | 99.99th=[32375] 00:35:33.076 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2112.40, stdev=77.36, samples=20 00:35:33.076 iops : min= 512, max= 576, avg=528.10, stdev=19.34, samples=20 00:35:33.076 lat (msec) : 20=0.91%, 50=99.09% 00:35:33.076 cpu : usr=98.38%, sys=1.24%, ctx=14, majf=0, minf=24 00:35:33.076 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:33.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.076 filename2: (groupid=0, jobs=1): err= 0: pid=1450611: Tue Dec 10 06:00:19 2024 00:35:33.076 read: IOPS=524, BW=2100KiB/s (2150kB/s)(20.6MiB/10023msec) 00:35:33.076 slat (nsec): min=7565, max=44120, avg=18452.32, stdev=6063.12 00:35:33.076 clat (usec): min=20190, max=40334, avg=30333.81, stdev=769.98 00:35:33.076 lat (usec): min=20199, max=40348, avg=30352.27, stdev=769.89 00:35:33.076 clat percentiles (usec): 00:35:33.076 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:33.076 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.076 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:35:33.076 | 99.00th=[31851], 99.50th=[34341], 99.90th=[40109], 99.95th=[40109], 00:35:33.076 | 99.99th=[40109] 00:35:33.076 bw ( KiB/s): min= 2032, max= 2176, per=4.17%, avg=2099.20, stdev=56.77, samples=20 00:35:33.076 iops : min= 508, max= 544, avg=524.80, stdev=14.19, samples=20 00:35:33.076 lat (msec) : 50=100.00% 00:35:33.076 cpu : usr=98.20%, sys=1.40%, ctx=34, majf=0, minf=23 00:35:33.076 IO depths : 1=0.4%, 2=6.6%, 4=24.9%, 8=56.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:35:33.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:35:33.076 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 issued rwts: total=5262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.076 filename2: (groupid=0, jobs=1): err= 0: pid=1450612: Tue Dec 10 06:00:19 2024 00:35:33.076 read: IOPS=525, BW=2101KiB/s (2152kB/s)(20.6MiB/10020msec) 00:35:33.076 slat (nsec): min=6231, max=74235, avg=17632.98, stdev=8790.79 00:35:33.076 clat (usec): min=19622, max=47382, avg=30318.10, stdev=920.95 00:35:33.076 lat (usec): min=19630, max=47399, avg=30335.73, stdev=920.22 00:35:33.076 clat percentiles (usec): 00:35:33.076 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:35:33.076 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.076 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:33.076 | 99.00th=[31327], 99.50th=[31589], 99.90th=[41681], 99.95th=[41681], 00:35:33.076 | 99.99th=[47449] 00:35:33.076 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2095.95, stdev=62.13, samples=20 00:35:33.076 iops : min= 510, max= 544, avg=523.95, stdev=15.57, samples=20 00:35:33.076 lat (msec) : 20=0.30%, 50=99.70% 00:35:33.076 cpu : usr=98.46%, sys=1.15%, ctx=14, majf=0, minf=27 00:35:33.076 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:33.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.076 filename2: (groupid=0, jobs=1): err= 0: pid=1450613: Tue Dec 10 06:00:19 2024 00:35:33.076 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10007msec) 00:35:33.076 slat (nsec): min=7335, max=88271, avg=33116.49, stdev=15157.51 00:35:33.076 clat (usec): min=19189, 
max=58242, avg=30229.37, stdev=1675.78 00:35:33.076 lat (usec): min=19251, max=58256, avg=30262.49, stdev=1673.91 00:35:33.076 clat percentiles (usec): 00:35:33.076 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:33.076 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:33.076 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:35:33.076 | 99.00th=[31327], 99.50th=[31589], 99.90th=[58459], 99.95th=[58459], 00:35:33.076 | 99.99th=[58459] 00:35:33.076 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2092.35, stdev=75.46, samples=20 00:35:33.076 iops : min= 480, max= 544, avg=523.05, stdev=18.89, samples=20 00:35:33.076 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:35:33.076 cpu : usr=98.51%, sys=1.10%, ctx=16, majf=0, minf=25 00:35:33.076 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:33.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.076 filename2: (groupid=0, jobs=1): err= 0: pid=1450614: Tue Dec 10 06:00:19 2024 00:35:33.076 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10006msec) 00:35:33.076 slat (nsec): min=4380, max=86928, avg=33405.00, stdev=15809.61 00:35:33.076 clat (usec): min=19022, max=56299, avg=30187.14, stdev=1583.64 00:35:33.076 lat (usec): min=19037, max=56312, avg=30220.54, stdev=1582.12 00:35:33.076 clat percentiles (usec): 00:35:33.076 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:35:33.076 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:33.076 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:35:33.076 | 99.00th=[31327], 99.50th=[31589], 99.90th=[56361], 99.95th=[56361], 00:35:33.076 | 
99.99th=[56361] 00:35:33.076 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2092.80, stdev=75.15, samples=20 00:35:33.076 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:33.076 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:35:33.076 cpu : usr=98.36%, sys=1.26%, ctx=15, majf=0, minf=20 00:35:33.076 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:33.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.076 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.076 filename2: (groupid=0, jobs=1): err= 0: pid=1450615: Tue Dec 10 06:00:19 2024 00:35:33.076 read: IOPS=528, BW=2113KiB/s (2164kB/s)(20.7MiB/10021msec) 00:35:33.076 slat (nsec): min=7534, max=89343, avg=21446.74, stdev=11950.49 00:35:33.076 clat (usec): min=12856, max=32471, avg=30120.78, stdev=1502.02 00:35:33.076 lat (usec): min=12904, max=32538, avg=30142.22, stdev=1499.65 00:35:33.076 clat percentiles (usec): 00:35:33.076 | 1.00th=[20579], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:35:33.076 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:33.076 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:33.076 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[32113], 00:35:33.076 | 99.99th=[32375] 00:35:33.076 bw ( KiB/s): min= 2048, max= 2288, per=4.19%, avg=2111.60, stdev=75.68, samples=20 00:35:33.077 iops : min= 512, max= 572, avg=527.90, stdev=18.92, samples=20 00:35:33.077 lat (msec) : 20=0.87%, 50=99.13% 00:35:33.077 cpu : usr=98.66%, sys=0.96%, ctx=13, majf=0, minf=18 00:35:33.077 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:33.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.077 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.077 issued rwts: total=5294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.077 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.077 filename2: (groupid=0, jobs=1): err= 0: pid=1450616: Tue Dec 10 06:00:19 2024 00:35:33.077 read: IOPS=524, BW=2098KiB/s (2149kB/s)(20.5MiB/10004msec) 00:35:33.077 slat (usec): min=8, max=101, avg=55.00, stdev=14.70 00:35:33.077 clat (usec): min=19042, max=56056, avg=30003.16, stdev=1579.75 00:35:33.077 lat (usec): min=19059, max=56097, avg=30058.16, stdev=1579.06 00:35:33.077 clat percentiles (usec): 00:35:33.077 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:35:33.077 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:35:33.077 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:35:33.077 | 99.00th=[31065], 99.50th=[31589], 99.90th=[55837], 99.95th=[55837], 00:35:33.077 | 99.99th=[55837] 00:35:33.077 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2092.95, stdev=74.79, samples=20 00:35:33.077 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:33.077 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:35:33.077 cpu : usr=98.64%, sys=0.96%, ctx=14, majf=0, minf=19 00:35:33.077 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:33.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.077 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.077 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.077 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.077 00:35:33.077 Run status group 0 (all jobs): 00:35:33.077 READ: bw=49.2MiB/s (51.6MB/s), 2095KiB/s-2124KiB/s (2145kB/s-2175kB/s), io=494MiB (518MB), run=10004-10045msec 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:33.077 06:00:19 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:33.077 
06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 bdev_null0 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 [2024-12-10 06:00:19.692744] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 bdev_null1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:33.077 { 00:35:33.077 "params": { 00:35:33.077 "name": "Nvme$subsystem", 00:35:33.077 "trtype": "$TEST_TRANSPORT", 00:35:33.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:33.077 "adrfam": "ipv4", 00:35:33.077 "trsvcid": "$NVMF_PORT", 00:35:33.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:33.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:33.077 "hdgst": ${hdgst:-false}, 00:35:33.077 "ddgst": ${ddgst:-false} 00:35:33.077 }, 00:35:33.077 "method": "bdev_nvme_attach_controller" 00:35:33.077 } 00:35:33.077 EOF 00:35:33.077 
)") 00:35:33.077 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:33.078 06:00:19 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:33.078 { 00:35:33.078 "params": { 00:35:33.078 "name": "Nvme$subsystem", 00:35:33.078 "trtype": "$TEST_TRANSPORT", 00:35:33.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:33.078 "adrfam": "ipv4", 00:35:33.078 "trsvcid": "$NVMF_PORT", 00:35:33.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:33.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:33.078 "hdgst": ${hdgst:-false}, 00:35:33.078 "ddgst": ${ddgst:-false} 00:35:33.078 }, 00:35:33.078 "method": "bdev_nvme_attach_controller" 00:35:33.078 } 00:35:33.078 EOF 00:35:33.078 )") 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:33.078 "params": { 00:35:33.078 "name": "Nvme0", 00:35:33.078 "trtype": "tcp", 00:35:33.078 "traddr": "10.0.0.2", 00:35:33.078 "adrfam": "ipv4", 00:35:33.078 "trsvcid": "4420", 00:35:33.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:33.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:33.078 "hdgst": false, 00:35:33.078 "ddgst": false 00:35:33.078 }, 00:35:33.078 "method": "bdev_nvme_attach_controller" 00:35:33.078 },{ 00:35:33.078 "params": { 00:35:33.078 "name": "Nvme1", 00:35:33.078 "trtype": "tcp", 00:35:33.078 "traddr": "10.0.0.2", 00:35:33.078 "adrfam": "ipv4", 00:35:33.078 "trsvcid": "4420", 00:35:33.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:33.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:33.078 "hdgst": false, 00:35:33.078 "ddgst": false 00:35:33.078 }, 00:35:33.078 "method": "bdev_nvme_attach_controller" 00:35:33.078 }' 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:33.078 06:00:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:33.078 06:00:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.078 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:33.078 ... 00:35:33.078 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:33.078 ... 00:35:33.078 fio-3.35 00:35:33.078 Starting 4 threads 00:35:38.403 00:35:38.403 filename0: (groupid=0, jobs=1): err= 0: pid=1452518: Tue Dec 10 06:00:25 2024 00:35:38.403 read: IOPS=2778, BW=21.7MiB/s (22.8MB/s)(109MiB/5001msec) 00:35:38.403 slat (nsec): min=6081, max=44710, avg=8798.95, stdev=3006.51 00:35:38.403 clat (usec): min=655, max=5597, avg=2852.76, stdev=417.03 00:35:38.403 lat (usec): min=662, max=5603, avg=2861.56, stdev=416.88 00:35:38.403 clat percentiles (usec): 00:35:38.403 | 1.00th=[ 1745], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2540], 00:35:38.403 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 2966], 00:35:38.403 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3261], 95.00th=[ 3490], 00:35:38.403 | 99.00th=[ 4228], 99.50th=[ 4490], 99.90th=[ 5145], 99.95th=[ 5407], 00:35:38.403 | 99.99th=[ 5473] 00:35:38.403 bw ( KiB/s): min=21008, max=23488, per=26.07%, avg=22177.78, stdev=795.49, samples=9 00:35:38.403 iops : min= 2626, max= 2936, avg=2772.22, stdev=99.44, samples=9 00:35:38.403 lat (usec) : 750=0.02%, 1000=0.09% 00:35:38.403 lat (msec) : 2=2.13%, 4=96.39%, 10=1.37% 00:35:38.403 cpu : usr=95.50%, sys=4.20%, ctx=6, majf=0, minf=9 00:35:38.403 IO depths : 1=0.4%, 2=6.1%, 4=65.4%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:38.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.403 complete 
: 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.403 issued rwts: total=13893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.403 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:38.403 filename0: (groupid=0, jobs=1): err= 0: pid=1452519: Tue Dec 10 06:00:25 2024 00:35:38.403 read: IOPS=2564, BW=20.0MiB/s (21.0MB/s)(100MiB/5002msec) 00:35:38.403 slat (nsec): min=6116, max=57293, avg=8977.27, stdev=3150.90 00:35:38.403 clat (usec): min=665, max=5638, avg=3093.01, stdev=503.44 00:35:38.403 lat (usec): min=672, max=5647, avg=3101.99, stdev=503.24 00:35:38.403 clat percentiles (usec): 00:35:38.403 | 1.00th=[ 2073], 5.00th=[ 2442], 10.00th=[ 2638], 20.00th=[ 2802], 00:35:38.403 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:35:38.403 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3654], 95.00th=[ 4178], 00:35:38.403 | 99.00th=[ 5014], 99.50th=[ 5211], 99.90th=[ 5538], 99.95th=[ 5538], 00:35:38.403 | 99.99th=[ 5604] 00:35:38.403 bw ( KiB/s): min=18848, max=21440, per=24.20%, avg=20586.67, stdev=769.62, samples=9 00:35:38.403 iops : min= 2356, max= 2680, avg=2573.33, stdev=96.20, samples=9 00:35:38.403 lat (usec) : 750=0.02%, 1000=0.03% 00:35:38.403 lat (msec) : 2=0.72%, 4=93.09%, 10=6.14% 00:35:38.403 cpu : usr=95.72%, sys=3.96%, ctx=7, majf=0, minf=9 00:35:38.403 IO depths : 1=0.1%, 2=2.8%, 4=68.3%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:38.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.403 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.403 issued rwts: total=12829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.403 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:38.403 filename1: (groupid=0, jobs=1): err= 0: pid=1452520: Tue Dec 10 06:00:25 2024 00:35:38.403 read: IOPS=2577, BW=20.1MiB/s (21.1MB/s)(101MiB/5001msec) 00:35:38.403 slat (nsec): min=6115, max=46487, avg=8802.58, stdev=3094.46 00:35:38.403 clat (usec): 
min=602, max=5431, avg=3077.75, stdev=434.16 00:35:38.403 lat (usec): min=613, max=5443, avg=3086.55, stdev=433.98 00:35:38.403 clat percentiles (usec): 00:35:38.403 | 1.00th=[ 2114], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2802], 00:35:38.403 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:35:38.403 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3589], 95.00th=[ 3851], 00:35:38.403 | 99.00th=[ 4621], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5276], 00:35:38.403 | 99.99th=[ 5407] 00:35:38.403 bw ( KiB/s): min=19751, max=21424, per=24.24%, avg=20621.22, stdev=594.04, samples=9 00:35:38.403 iops : min= 2468, max= 2678, avg=2577.56, stdev=74.42, samples=9 00:35:38.403 lat (usec) : 750=0.03%, 1000=0.01% 00:35:38.403 lat (msec) : 2=0.72%, 4=95.24%, 10=4.00% 00:35:38.403 cpu : usr=96.08%, sys=3.62%, ctx=9, majf=0, minf=9 00:35:38.403 IO depths : 1=0.2%, 2=4.2%, 4=68.0%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:38.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.403 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.403 issued rwts: total=12888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.403 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:38.403 filename1: (groupid=0, jobs=1): err= 0: pid=1452521: Tue Dec 10 06:00:25 2024 00:35:38.403 read: IOPS=2715, BW=21.2MiB/s (22.2MB/s)(106MiB/5002msec) 00:35:38.403 slat (nsec): min=6105, max=58957, avg=8944.94, stdev=3089.76 00:35:38.403 clat (usec): min=896, max=5814, avg=2918.44, stdev=493.11 00:35:38.403 lat (usec): min=905, max=5820, avg=2927.39, stdev=492.88 00:35:38.403 clat percentiles (usec): 00:35:38.403 | 1.00th=[ 1860], 5.00th=[ 2212], 10.00th=[ 2409], 20.00th=[ 2573], 00:35:38.403 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 2966], 00:35:38.403 | 70.00th=[ 2999], 80.00th=[ 3130], 90.00th=[ 3458], 95.00th=[ 3884], 00:35:38.403 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5211], 99.95th=[ 
5473], 00:35:38.403 | 99.99th=[ 5800] 00:35:38.403 bw ( KiB/s): min=19984, max=23632, per=25.49%, avg=21687.11, stdev=1149.33, samples=9 00:35:38.403 iops : min= 2498, max= 2954, avg=2710.89, stdev=143.67, samples=9 00:35:38.403 lat (usec) : 1000=0.01% 00:35:38.403 lat (msec) : 2=1.38%, 4=94.35%, 10=4.26% 00:35:38.403 cpu : usr=95.92%, sys=3.76%, ctx=6, majf=0, minf=9 00:35:38.403 IO depths : 1=0.3%, 2=6.6%, 4=65.2%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:38.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.403 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.403 issued rwts: total=13583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.403 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:38.403 00:35:38.403 Run status group 0 (all jobs): 00:35:38.403 READ: bw=83.1MiB/s (87.1MB/s), 20.0MiB/s-21.7MiB/s (21.0MB/s-22.8MB/s), io=416MiB (436MB), run=5001-5002msec 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:38.403 06:00:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.403 00:35:38.403 real 0m24.320s 00:35:38.403 user 4m51.216s 00:35:38.403 sys 0m5.467s 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:38.403 06:00:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:38.403 ************************************ 00:35:38.403 END TEST fio_dif_rand_params 00:35:38.403 ************************************ 00:35:38.403 06:00:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:38.403 06:00:26 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:38.403 06:00:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:38.403 06:00:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:38.403 ************************************ 00:35:38.403 START TEST fio_dif_digest 00:35:38.403 ************************************ 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:38.403 bdev_null0 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:38.403 [2024-12-10 06:00:26.083877] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:38.403 { 00:35:38.403 "params": { 00:35:38.403 "name": "Nvme$subsystem", 00:35:38.403 "trtype": "$TEST_TRANSPORT", 00:35:38.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:38.403 "adrfam": "ipv4", 00:35:38.403 "trsvcid": "$NVMF_PORT", 00:35:38.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:38.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:38.403 "hdgst": ${hdgst:-false}, 00:35:38.403 "ddgst": ${ddgst:-false} 00:35:38.403 }, 00:35:38.403 "method": "bdev_nvme_attach_controller" 00:35:38.403 } 00:35:38.403 EOF 00:35:38.403 )") 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:38.403 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:38.404 "params": { 00:35:38.404 "name": "Nvme0", 00:35:38.404 "trtype": "tcp", 00:35:38.404 "traddr": "10.0.0.2", 00:35:38.404 "adrfam": "ipv4", 00:35:38.404 "trsvcid": "4420", 00:35:38.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:38.404 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:38.404 "hdgst": true, 00:35:38.404 "ddgst": true 00:35:38.404 }, 00:35:38.404 "method": "bdev_nvme_attach_controller" 00:35:38.404 }' 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:38.404 06:00:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.662 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:38.662 ... 
00:35:38.662 fio-3.35 00:35:38.662 Starting 3 threads 00:35:50.867 00:35:50.867 filename0: (groupid=0, jobs=1): err= 0: pid=1453566: Tue Dec 10 06:00:37 2024 00:35:50.867 read: IOPS=289, BW=36.1MiB/s (37.9MB/s)(362MiB/10006msec) 00:35:50.867 slat (nsec): min=6470, max=62062, avg=18758.45, stdev=7407.50 00:35:50.867 clat (usec): min=7928, max=13882, avg=10357.16, stdev=782.30 00:35:50.867 lat (usec): min=7940, max=13902, avg=10375.92, stdev=782.89 00:35:50.867 clat percentiles (usec): 00:35:50.867 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:35:50.867 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:35:50.867 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:35:50.867 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13173], 99.95th=[13173], 00:35:50.867 | 99.99th=[13829] 00:35:50.867 bw ( KiB/s): min=35840, max=39424, per=34.73%, avg=36992.00, stdev=880.96, samples=20 00:35:50.867 iops : min= 280, max= 308, avg=289.00, stdev= 6.88, samples=20 00:35:50.867 lat (msec) : 10=31.88%, 20=68.12% 00:35:50.867 cpu : usr=96.19%, sys=3.48%, ctx=33, majf=0, minf=61 00:35:50.867 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.867 issued rwts: total=2892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:50.867 filename0: (groupid=0, jobs=1): err= 0: pid=1453567: Tue Dec 10 06:00:37 2024 00:35:50.867 read: IOPS=276, BW=34.5MiB/s (36.2MB/s)(347MiB/10045msec) 00:35:50.867 slat (nsec): min=6386, max=56490, avg=16835.07, stdev=7161.34 00:35:50.867 clat (usec): min=8400, max=53931, avg=10824.84, stdev=1370.19 00:35:50.867 lat (usec): min=8426, max=53960, avg=10841.68, stdev=1370.09 00:35:50.867 clat percentiles (usec): 00:35:50.867 | 1.00th=[ 9110], 
5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:35:50.867 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:35:50.867 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11994], 95.00th=[12387], 00:35:50.867 | 99.00th=[13304], 99.50th=[13829], 99.90th=[14484], 99.95th=[46924], 00:35:50.867 | 99.99th=[53740] 00:35:50.867 bw ( KiB/s): min=33024, max=37632, per=33.32%, avg=35494.40, stdev=1118.27, samples=20 00:35:50.867 iops : min= 258, max= 294, avg=277.30, stdev= 8.74, samples=20 00:35:50.867 lat (msec) : 10=16.04%, 20=83.89%, 50=0.04%, 100=0.04% 00:35:50.867 cpu : usr=93.13%, sys=4.58%, ctx=657, majf=0, minf=52 00:35:50.867 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.867 issued rwts: total=2775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:50.867 filename0: (groupid=0, jobs=1): err= 0: pid=1453568: Tue Dec 10 06:00:37 2024 00:35:50.867 read: IOPS=268, BW=33.5MiB/s (35.1MB/s)(337MiB/10046msec) 00:35:50.867 slat (nsec): min=6678, max=56102, avg=16006.95, stdev=6809.37 00:35:50.867 clat (usec): min=6528, max=47669, avg=11157.67, stdev=1313.18 00:35:50.867 lat (usec): min=6550, max=47678, avg=11173.68, stdev=1313.04 00:35:50.867 clat percentiles (usec): 00:35:50.867 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10421], 00:35:50.867 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:35:50.867 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12256], 95.00th=[12780], 00:35:50.867 | 99.00th=[13566], 99.50th=[13960], 99.90th=[14746], 99.95th=[45351], 00:35:50.867 | 99.99th=[47449] 00:35:50.867 bw ( KiB/s): min=33280, max=36608, per=32.34%, avg=34444.80, stdev=836.68, samples=20 00:35:50.867 iops : min= 260, max= 286, avg=269.10, stdev= 6.54, 
samples=20 00:35:50.867 lat (msec) : 10=7.43%, 20=92.50%, 50=0.07% 00:35:50.867 cpu : usr=96.87%, sys=2.78%, ctx=64, majf=0, minf=77 00:35:50.867 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.867 issued rwts: total=2693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:50.867 00:35:50.867 Run status group 0 (all jobs): 00:35:50.867 READ: bw=104MiB/s (109MB/s), 33.5MiB/s-36.1MiB/s (35.1MB/s-37.9MB/s), io=1045MiB (1096MB), run=10006-10046msec 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.867 00:35:50.867 real 
0m11.276s 00:35:50.867 user 0m35.117s 00:35:50.867 sys 0m1.393s 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:50.867 06:00:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:50.867 ************************************ 00:35:50.867 END TEST fio_dif_digest 00:35:50.867 ************************************ 00:35:50.867 06:00:37 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:50.867 06:00:37 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:50.867 06:00:37 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:50.867 06:00:37 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:50.867 06:00:37 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:50.867 06:00:37 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:50.867 06:00:37 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:50.868 06:00:37 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:50.868 rmmod nvme_tcp 00:35:50.868 rmmod nvme_fabrics 00:35:50.868 rmmod nvme_keyring 00:35:50.868 06:00:37 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:50.868 06:00:37 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:50.868 06:00:37 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:50.868 06:00:37 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1444645 ']' 00:35:50.868 06:00:37 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1444645 00:35:50.868 06:00:37 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1444645 ']' 00:35:50.868 06:00:37 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1444645 00:35:50.868 06:00:37 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:50.868 06:00:37 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.868 06:00:37 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1444645 00:35:50.868 06:00:37 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:50.868 06:00:37 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:50.868 06:00:37 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1444645' 00:35:50.868 killing process with pid 1444645 00:35:50.868 06:00:37 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1444645 00:35:50.868 06:00:37 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1444645 00:35:50.868 06:00:37 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:50.868 06:00:37 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:52.774 Waiting for block devices as requested 00:35:52.774 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:52.774 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:52.774 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:52.774 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:53.032 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:53.032 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:53.032 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:53.291 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:53.291 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:53.291 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:53.291 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:53.549 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:53.549 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:53.549 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:53.808 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:53.808 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:53.808 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:54.067 06:00:41 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:54.067 06:00:41 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:54.067 06:00:41 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:54.067 06:00:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:54.067 06:00:41 nvmf_dif -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:35:54.067 06:00:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:54.067 06:00:41 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:54.067 06:00:41 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:54.067 06:00:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.067 06:00:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:54.067 06:00:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.973 06:00:43 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:55.973 00:35:55.973 real 1m14.268s 00:35:55.973 user 7m9.283s 00:35:55.973 sys 0m20.303s 00:35:55.973 06:00:43 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:55.973 06:00:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:55.973 ************************************ 00:35:55.973 END TEST nvmf_dif 00:35:55.973 ************************************ 00:35:55.973 06:00:43 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:55.973 06:00:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:55.973 06:00:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:55.973 06:00:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.973 ************************************ 00:35:55.973 START TEST nvmf_abort_qd_sizes 00:35:55.973 ************************************ 00:35:55.973 06:00:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:56.232 * Looking for test storage... 
00:35:56.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:56.232 06:00:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:56.232 06:00:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:35:56.232 06:00:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:56.232 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:56.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.233 --rc genhtml_branch_coverage=1 00:35:56.233 --rc genhtml_function_coverage=1 00:35:56.233 --rc genhtml_legend=1 00:35:56.233 --rc geninfo_all_blocks=1 00:35:56.233 --rc geninfo_unexecuted_blocks=1 00:35:56.233 00:35:56.233 ' 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:56.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.233 --rc genhtml_branch_coverage=1 00:35:56.233 --rc genhtml_function_coverage=1 00:35:56.233 --rc genhtml_legend=1 00:35:56.233 --rc 
geninfo_all_blocks=1 00:35:56.233 --rc geninfo_unexecuted_blocks=1 00:35:56.233 00:35:56.233 ' 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:56.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.233 --rc genhtml_branch_coverage=1 00:35:56.233 --rc genhtml_function_coverage=1 00:35:56.233 --rc genhtml_legend=1 00:35:56.233 --rc geninfo_all_blocks=1 00:35:56.233 --rc geninfo_unexecuted_blocks=1 00:35:56.233 00:35:56.233 ' 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:56.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.233 --rc genhtml_branch_coverage=1 00:35:56.233 --rc genhtml_function_coverage=1 00:35:56.233 --rc genhtml_legend=1 00:35:56.233 --rc geninfo_all_blocks=1 00:35:56.233 --rc geninfo_unexecuted_blocks=1 00:35:56.233 00:35:56.233 ' 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.233 06:00:44 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.233 06:00:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:56.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:56.233 06:00:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:02.803 06:00:49 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:02.803 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:02.803 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:02.803 Found net devices under 0000:af:00.0: cvl_0_0 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:02.803 Found net devices under 0000:af:00.1: cvl_0_1 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:02.803 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:02.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:02.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:36:02.804 00:36:02.804 --- 10.0.0.2 ping statistics --- 00:36:02.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.804 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:02.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:02.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:36:02.804 00:36:02.804 --- 10.0.0.1 ping statistics --- 00:36:02.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.804 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:36:02.804 06:00:49 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:02.804 06:00:50 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:05.341 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:05.341 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:05.909 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:06.167 06:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:06.167 06:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:06.168 06:00:53 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1461424 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1461424 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1461424 ']' 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:06.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:06.168 06:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:06.168 [2024-12-10 06:00:53.925352] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:36:06.168 [2024-12-10 06:00:53.925399] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:06.168 [2024-12-10 06:00:54.005010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:06.168 [2024-12-10 06:00:54.045862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:06.168 [2024-12-10 06:00:54.045903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:06.168 [2024-12-10 06:00:54.045910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:06.168 [2024-12-10 06:00:54.045915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:06.168 [2024-12-10 06:00:54.045920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:06.168 [2024-12-10 06:00:54.047403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:06.168 [2024-12-10 06:00:54.047512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:36:06.168 [2024-12-10 06:00:54.047621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:06.168 [2024-12-10 06:00:54.047622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]]
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]})
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]]
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 ))
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 ))
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:06.426 06:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:36:06.426 ************************************
00:36:06.426 START TEST spdk_target_abort
00:36:06.426 ************************************
00:36:06.426 06:00:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target
00:36:06.426 06:00:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:36:06.426 06:00:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
00:36:06.426 06:00:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:06.426 06:00:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:09.708 spdk_targetn1
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort --
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:09.708 [2024-12-10 06:00:57.070120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:09.708 [2024-12-10 06:00:57.110405] tcp.c:1099:nvmf_tcp_listen:
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort --
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:09.708 06:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:12.983 Initializing NVMe Controllers
00:36:12.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:36:12.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:12.983 Initialization complete. Launching workers.
00:36:12.983 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15597, failed: 0
00:36:12.983 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1328, failed to submit 14269
00:36:12.983 success 737, unsuccessful 591, failed 0
00:36:12.983 06:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:12.983 06:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:16.261 Initializing NVMe Controllers
00:36:16.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:36:16.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:16.261 Initialization complete. Launching workers.
00:36:16.261 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8810, failed: 0
00:36:16.261 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1257, failed to submit 7553
00:36:16.261 success 330, unsuccessful 927, failed 0
00:36:16.261 06:01:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:16.261 06:01:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:19.541 Initializing NVMe Controllers
00:36:19.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:36:19.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:19.541 Initialization complete. Launching workers.
00:36:19.541 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39034, failed: 0
00:36:19.541 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2685, failed to submit 36349
00:36:19.541 success 597, unsuccessful 2088, failed 0
00:36:19.541 06:01:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:36:19.541 06:01:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.541 06:01:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:19.541 06:01:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.541 06:01:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:36:19.541 06:01:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.541 06:01:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1461424
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1461424 ']'
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1461424
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1461424
00:36:20.474 06:01:08
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1461424'
killing process with pid 1461424
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1461424
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1461424
00:36:20.474
00:36:20.474 real 0m14.103s
00:36:20.474 user 0m53.655s
00:36:20.474 sys 0m2.738s
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:20.474 06:01:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:20.474 ************************************
00:36:20.474 END TEST spdk_target_abort
00:36:20.474 ************************************
00:36:20.733 06:01:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:36:20.733 06:01:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:20.733 06:01:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:20.733 06:01:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:36:20.733 ************************************
00:36:20.733 START TEST kernel_target_abort
00:36:20.733 ************************************
00:36:20.733 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target
00:36:20.733 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip
00:36:20.733 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip
00:36:20.733 06:01:08
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:20.733 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:20.733 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort --
nvmf/common.sh@667 -- # local block nvme
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:36:20.734 06:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:36:23.271 Waiting for block devices as requested
00:36:23.271 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:36:23.529 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:36:23.529 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:36:23.529 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:36:23.788 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:36:23.788 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:36:23.788 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:36:24.047 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:36:24.047 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:36:24.047 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:36:24.047 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:36:24.306 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:36:24.306 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:36:24.306 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:36:24.565 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:36:24.565 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:36:24.565 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:36:24.850 06:01:12
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort --
nvmf/common.sh@695 -- # echo 1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:36:24.850
00:36:24.850 Discovery Log Number of Records 2, Generation counter 2
00:36:24.850 =====Discovery Log Entry 0======
00:36:24.850 trtype: tcp
00:36:24.850 adrfam: ipv4
00:36:24.850 subtype: current discovery subsystem
00:36:24.850 treq: not specified, sq flow control disable supported
00:36:24.850 portid: 1
00:36:24.850 trsvcid: 4420
00:36:24.850 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:36:24.850 traddr: 10.0.0.1
00:36:24.850 eflags: none
00:36:24.850 sectype: none
00:36:24.850 =====Discovery Log Entry 1======
00:36:24.850 trtype: tcp
00:36:24.850 adrfam: ipv4
00:36:24.850 subtype: nvme subsystem
00:36:24.850 treq: not specified, sq flow control disable supported
00:36:24.850 portid: 1
00:36:24.850 trsvcid: 4420
00:36:24.850 subnqn: nqn.2016-06.io.spdk:testnqn
00:36:24.850 traddr: 10.0.0.1
00:36:24.850 eflags: none
00:36:24.850 sectype: none
00:36:24.850 06:01:12
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for
r in trtype adrfam traddr trsvcid subnqn
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:24.850 06:01:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:28.162 Initializing NVMe Controllers
00:36:28.162 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:36:28.162 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:28.162 Initialization complete. Launching workers.
00:36:28.162 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80906, failed: 0
00:36:28.162 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 80906, failed to submit 0
00:36:28.162 success 0, unsuccessful 80906, failed 0
00:36:28.162 06:01:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:28.162 06:01:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:31.447 Initializing NVMe Controllers
00:36:31.447 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:36:31.447 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:31.447 Initialization complete. Launching workers.
00:36:31.447 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 147197, failed: 0
00:36:31.447 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28434, failed to submit 118763
00:36:31.447 success 0, unsuccessful 28434, failed 0
00:36:31.447 06:01:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:31.447 06:01:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:34.732 Initializing NVMe Controllers
00:36:34.732 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:36:34.732 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:34.732 Initialization complete. Launching workers.
00:36:34.732 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 130976, failed: 0
00:36:34.732 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32766, failed to submit 98210
00:36:34.732 success 0, unsuccessful 32766, failed 0
00:36:34.732 06:01:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:36:34.732 06:01:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:36:34.732 06:01:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0
00:36:34.732 06:01:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:36:34.732 06:01:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:36:34.732 06:01:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:36:34.732 06:01:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:36:34.732 06:01:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:36:34.732 06:01:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:36:34.732 06:01:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:36:37.265 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:36:37.265
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:36:37.265 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:36:38.202 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:36:38.202
00:36:38.202 real 0m17.475s
00:36:38.202 user 0m8.587s
00:36:38.202 sys 0m5.232s
00:36:38.202 06:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:38.202 06:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:38.202 ************************************
00:36:38.202 END TEST kernel_target_abort
00:36:38.202 ************************************
00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync
00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e
00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@127
-- # modprobe -v -r nvme-fabrics 00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1461424 ']' 00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1461424 00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1461424 ']' 00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1461424 00:36:38.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1461424) - No such process 00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1461424 is not found' 00:36:38.202 Process with pid 1461424 is not found 00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:38.202 06:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:40.737 Waiting for block devices as requested 00:36:41.004 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:41.004 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:41.264 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:41.264 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:41.264 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:41.264 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:41.523 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:41.523 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:41.523 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:41.782 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:41.782 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:41.782 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:42.041 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:42.041 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:42.041 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:42.041 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:42.300 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:42.300 06:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:42.300 06:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:42.300 06:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:42.300 06:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:42.300 06:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:42.300 06:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:42.300 06:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:42.300 06:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:42.300 06:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.300 06:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:42.300 06:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.835 06:01:32 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:44.835 00:36:44.835 real 0m48.274s 00:36:44.835 user 1m6.560s 00:36:44.835 sys 0m16.690s 00:36:44.835 06:01:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.835 06:01:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:44.835 ************************************ 00:36:44.835 END TEST nvmf_abort_qd_sizes 00:36:44.835 ************************************ 00:36:44.835 06:01:32 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:44.835 06:01:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:44.835 06:01:32 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:44.835 06:01:32 -- common/autotest_common.sh@10 -- # set +x 00:36:44.835 ************************************ 00:36:44.835 START TEST keyring_file 00:36:44.835 ************************************ 00:36:44.835 06:01:32 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:44.835 * Looking for test storage... 00:36:44.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:44.835 06:01:32 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:44.835 06:01:32 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:36:44.835 06:01:32 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:44.835 06:01:32 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:44.835 06:01:32 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:44.835 06:01:32 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:44.835 06:01:32 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.835 --rc genhtml_branch_coverage=1 00:36:44.835 --rc genhtml_function_coverage=1 00:36:44.835 --rc genhtml_legend=1 00:36:44.835 --rc geninfo_all_blocks=1 00:36:44.835 --rc geninfo_unexecuted_blocks=1 00:36:44.835 00:36:44.835 ' 00:36:44.835 06:01:32 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.835 --rc genhtml_branch_coverage=1 00:36:44.835 --rc genhtml_function_coverage=1 00:36:44.835 --rc genhtml_legend=1 00:36:44.835 --rc geninfo_all_blocks=1 00:36:44.835 --rc 
geninfo_unexecuted_blocks=1 00:36:44.835 00:36:44.835 ' 00:36:44.835 06:01:32 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.835 --rc genhtml_branch_coverage=1 00:36:44.835 --rc genhtml_function_coverage=1 00:36:44.835 --rc genhtml_legend=1 00:36:44.835 --rc geninfo_all_blocks=1 00:36:44.835 --rc geninfo_unexecuted_blocks=1 00:36:44.835 00:36:44.835 ' 00:36:44.835 06:01:32 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:44.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.835 --rc genhtml_branch_coverage=1 00:36:44.835 --rc genhtml_function_coverage=1 00:36:44.835 --rc genhtml_legend=1 00:36:44.835 --rc geninfo_all_blocks=1 00:36:44.835 --rc geninfo_unexecuted_blocks=1 00:36:44.835 00:36:44.835 ' 00:36:44.835 06:01:32 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:44.835 06:01:32 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:44.835 06:01:32 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:44.835 06:01:32 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:44.835 06:01:32 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.835 06:01:32 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.835 06:01:32 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.835 06:01:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:44.835 06:01:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:44.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:44.835 06:01:32 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DQ8yVGWIKN 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DQ8yVGWIKN 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DQ8yVGWIKN 00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DQ8yVGWIKN 00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yU2XFQgFpy 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:44.836 06:01:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yU2XFQgFpy 00:36:44.836 06:01:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yU2XFQgFpy 00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yU2XFQgFpy 
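
The key-preparation steps traced above (mktemp, format_interchange_psk, chmod 0600, then registering the file over the bperf socket) can be sketched as below. The exact encoding, base64 of the key text plus its little-endian CRC-32 under the NVMeTLSkey-1 prefix, is an assumption based on the NVMe/TCP TLS PSK interchange format; the rpc.py registration is left commented out since it needs a live bdevperf instance listening on /var/tmp/bperf.sock.

```shell
#!/usr/bin/env bash
# Sketch of the prep_key flow traced in the log. The interchange encoding below
# (base64 of key text || crc32_le) is an assumption, not copied from nvmf/common.sh.

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte little-endian CRC-32 suffix
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
PY
}

key0=00112233445566778899aabbccddeeff
key0path=$(mktemp)                          # e.g. /tmp/tmp.XXXXXXXXXX, as in the log
format_interchange_psk "$key0" 0 > "$key0path"
chmod 0600 "$key0path"                      # keyring_file requires owner-only perms

# Registration step from the log (needs a running bdevperf target):
# scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
```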
00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=1470008 00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:44.836 06:01:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1470008 00:36:44.836 06:01:32 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1470008 ']' 00:36:44.836 06:01:32 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.836 06:01:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:44.836 06:01:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:44.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:44.836 06:01:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:44.836 06:01:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:44.836 [2024-12-10 06:01:32.563029] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:36:44.836 [2024-12-10 06:01:32.563078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470008 ] 00:36:44.836 [2024-12-10 06:01:32.618042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.836 [2024-12-10 06:01:32.658664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:45.095 06:01:32 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:45.095 [2024-12-10 06:01:32.885227] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:45.095 null0 00:36:45.095 [2024-12-10 06:01:32.917266] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:45.095 [2024-12-10 06:01:32.917532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.095 06:01:32 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
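
The duplicate-listener call above is wrapped in the harness's NOT helper, which inverts a command's exit status so an expected failure (here the -32602 "Invalid parameters" JSON-RPC error, since the listener already exists) counts as a pass. A minimal stand-in for that pattern follows; the real helper in autotest_common.sh additionally validates the wrapped command via valid_exec_arg and propagates an es code, as the trace shows.

```shell
#!/usr/bin/env bash
# Minimal sketch of the expected-failure wrapper used as `NOT rpc_cmd ...` in the log.
NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only if the wrapped command failed.
    (( es != 0 ))
}

# Usage mirroring the log: re-adding an existing listener is expected to fail.
# NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
#     nqn.2016-06.io.spdk:cnode0
```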
00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:45.095 [2024-12-10 06:01:32.949342] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:45.095 request: 00:36:45.095 { 00:36:45.095 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:45.095 "secure_channel": false, 00:36:45.095 "listen_address": { 00:36:45.095 "trtype": "tcp", 00:36:45.095 "traddr": "127.0.0.1", 00:36:45.095 "trsvcid": "4420" 00:36:45.095 }, 00:36:45.095 "method": "nvmf_subsystem_add_listener", 00:36:45.095 "req_id": 1 00:36:45.095 } 00:36:45.095 Got JSON-RPC error response 00:36:45.095 response: 00:36:45.095 { 00:36:45.095 "code": -32602, 00:36:45.095 "message": "Invalid parameters" 00:36:45.095 } 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:45.095 06:01:32 keyring_file -- keyring/file.sh@47 -- # bperfpid=1470013 00:36:45.095 06:01:32 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1470013 /var/tmp/bperf.sock 00:36:45.095 06:01:32 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:45.095 06:01:32 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1470013 ']' 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:45.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:45.095 06:01:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:45.354 [2024-12-10 06:01:33.001005] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 00:36:45.354 [2024-12-10 06:01:33.001046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470013 ] 00:36:45.354 [2024-12-10 06:01:33.073985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.354 [2024-12-10 06:01:33.113287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:45.354 06:01:33 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:45.354 06:01:33 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:45.354 06:01:33 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DQ8yVGWIKN 00:36:45.354 06:01:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DQ8yVGWIKN 00:36:45.612 06:01:33 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yU2XFQgFpy 00:36:45.612 06:01:33 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yU2XFQgFpy 00:36:45.871 06:01:33 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:45.871 06:01:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:45.871 06:01:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.871 06:01:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.871 06:01:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.129 06:01:33 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.DQ8yVGWIKN == \/\t\m\p\/\t\m\p\.\D\Q\8\y\V\G\W\I\K\N ]] 00:36:46.129 06:01:33 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:46.130 06:01:33 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:46.130 06:01:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.130 06:01:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:46.130 06:01:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.130 06:01:33 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.yU2XFQgFpy == \/\t\m\p\/\t\m\p\.\y\U\2\X\F\Q\g\F\p\y ]] 00:36:46.130 06:01:33 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:46.130 06:01:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:46.130 06:01:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.130 06:01:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.130 06:01:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.130 06:01:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
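
The get_refcnt checks above list keys over the bperf socket and filter the result with jq. The same pipeline can be run against a canned keyring_get_keys payload (the JSON below is an illustrative stand-in, not captured output) instead of a live target:

```shell
#!/usr/bin/env bash
# get_key/get_refcnt as traced in the log, against a canned keyring_get_keys payload.
keys_json='[
  {"name": "key0", "path": "/tmp/tmp.DQ8yVGWIKN", "refcnt": 1},
  {"name": "key1", "path": "/tmp/tmp.yU2XFQgFpy", "refcnt": 1}
]'

# Live equivalent: rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq ...
get_key()    { echo "$keys_json" | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }

get_refcnt key0    # prints 1 for the canned payload
```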
00:36:46.387 06:01:34 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:46.387 06:01:34 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:46.387 06:01:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:46.387 06:01:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.387 06:01:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.387 06:01:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.387 06:01:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:46.646 06:01:34 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:46.646 06:01:34 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.646 06:01:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.905 [2024-12-10 06:01:34.564108] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:46.905 nvme0n1 00:36:46.905 06:01:34 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:46.905 06:01:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:46.905 06:01:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.905 06:01:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.905 06:01:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.905 06:01:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:36:47.163 06:01:34 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:47.163 06:01:34 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:47.163 06:01:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:47.163 06:01:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:47.163 06:01:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.163 06:01:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:47.163 06:01:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.163 06:01:35 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:47.163 06:01:35 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:47.422 Running I/O for 1 seconds... 00:36:48.361 19498.00 IOPS, 76.16 MiB/s 00:36:48.361 Latency(us) 00:36:48.361 [2024-12-10T05:01:36.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.361 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:48.361 nvme0n1 : 1.00 19543.71 76.34 0.00 0.00 6537.48 2683.86 11671.65 00:36:48.361 [2024-12-10T05:01:36.257Z] =================================================================================================================== 00:36:48.361 [2024-12-10T05:01:36.257Z] Total : 19543.71 76.34 0.00 0.00 6537.48 2683.86 11671.65 00:36:48.361 { 00:36:48.361 "results": [ 00:36:48.361 { 00:36:48.361 "job": "nvme0n1", 00:36:48.361 "core_mask": "0x2", 00:36:48.361 "workload": "randrw", 00:36:48.361 "percentage": 50, 00:36:48.361 "status": "finished", 00:36:48.361 "queue_depth": 128, 00:36:48.361 "io_size": 4096, 00:36:48.361 "runtime": 1.004313, 00:36:48.361 "iops": 19543.70798745013, 00:36:48.361 "mibps": 76.34260932597707, 
00:36:48.361 "io_failed": 0, 00:36:48.361 "io_timeout": 0, 00:36:48.361 "avg_latency_us": 6537.481045348239, 00:36:48.361 "min_latency_us": 2683.8552380952383, 00:36:48.361 "max_latency_us": 11671.649523809523 00:36:48.361 } 00:36:48.361 ], 00:36:48.361 "core_count": 1 00:36:48.362 } 00:36:48.362 06:01:36 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:48.362 06:01:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:48.621 06:01:36 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:48.621 06:01:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:48.621 06:01:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.621 06:01:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.621 06:01:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:48.621 06:01:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.879 06:01:36 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:48.879 06:01:36 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:48.879 06:01:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:48.879 06:01:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.879 06:01:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.879 06:01:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:48.879 06:01:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.879 06:01:36 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:48.879 06:01:36 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:48.879 06:01:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:48.879 06:01:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:48.879 06:01:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:48.879 06:01:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.879 06:01:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:48.879 06:01:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.879 06:01:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:48.879 06:01:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:49.137 [2024-12-10 06:01:36.908931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:49.137 [2024-12-10 06:01:36.909324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1020470 (107): Transport endpoint is not connected 00:36:49.137 [2024-12-10 06:01:36.910319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1020470 (9): Bad file descriptor 00:36:49.137 [2024-12-10 06:01:36.911321] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:49.137 [2024-12-10 06:01:36.911332] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:49.137 [2024-12-10 06:01:36.911340] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:49.137 [2024-12-10 06:01:36.911348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:36:49.137 request: 00:36:49.137 { 00:36:49.137 "name": "nvme0", 00:36:49.137 "trtype": "tcp", 00:36:49.137 "traddr": "127.0.0.1", 00:36:49.137 "adrfam": "ipv4", 00:36:49.137 "trsvcid": "4420", 00:36:49.137 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:49.137 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:49.137 "prchk_reftag": false, 00:36:49.137 "prchk_guard": false, 00:36:49.137 "hdgst": false, 00:36:49.137 "ddgst": false, 00:36:49.137 "psk": "key1", 00:36:49.137 "allow_unrecognized_csi": false, 00:36:49.137 "method": "bdev_nvme_attach_controller", 00:36:49.137 "req_id": 1 00:36:49.137 } 00:36:49.137 Got JSON-RPC error response 00:36:49.137 response: 00:36:49.137 { 00:36:49.137 "code": -5, 00:36:49.137 "message": "Input/output error" 00:36:49.137 } 00:36:49.137 06:01:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:49.137 06:01:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:49.137 06:01:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:49.137 06:01:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:49.137 06:01:36 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:49.137 06:01:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.137 06:01:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.137 06:01:36 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:36:49.137 06:01:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.137 06:01:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.395 06:01:37 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:49.395 06:01:37 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:49.395 06:01:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:49.395 06:01:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.395 06:01:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.395 06:01:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:49.395 06:01:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.654 06:01:37 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:49.654 06:01:37 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:49.654 06:01:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:49.654 06:01:37 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:49.654 06:01:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:49.912 06:01:37 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:49.912 06:01:37 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:49.912 06:01:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.170 06:01:37 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:36:50.170 06:01:37 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.DQ8yVGWIKN 00:36:50.170 06:01:37 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DQ8yVGWIKN 00:36:50.170 06:01:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:50.170 06:01:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DQ8yVGWIKN 00:36:50.170 06:01:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:50.170 06:01:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:50.170 06:01:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:50.170 06:01:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:50.170 06:01:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DQ8yVGWIKN 00:36:50.170 06:01:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DQ8yVGWIKN 00:36:50.170 [2024-12-10 06:01:38.049398] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DQ8yVGWIKN': 0100660 00:36:50.170 [2024-12-10 06:01:38.049420] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:50.170 request: 00:36:50.170 { 00:36:50.170 "name": "key0", 00:36:50.171 "path": "/tmp/tmp.DQ8yVGWIKN", 00:36:50.171 "method": "keyring_file_add_key", 00:36:50.171 "req_id": 1 00:36:50.171 } 00:36:50.171 Got JSON-RPC error response 00:36:50.171 response: 00:36:50.171 { 00:36:50.171 "code": -1, 00:36:50.171 "message": "Operation not permitted" 00:36:50.171 } 00:36:50.428 06:01:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:50.428 06:01:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:50.428 06:01:38 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:50.428 06:01:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:50.428 06:01:38 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.DQ8yVGWIKN 00:36:50.428 06:01:38 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DQ8yVGWIKN 00:36:50.428 06:01:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DQ8yVGWIKN 00:36:50.428 06:01:38 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.DQ8yVGWIKN 00:36:50.428 06:01:38 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:50.428 06:01:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:50.428 06:01:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.428 06:01:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.428 06:01:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.428 06:01:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.686 06:01:38 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:50.686 06:01:38 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.686 06:01:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:50.686 06:01:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.686 06:01:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:50.686 06:01:38 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:50.686 06:01:38 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:50.686 06:01:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:50.686 06:01:38 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.686 06:01:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.944 [2024-12-10 06:01:38.667033] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DQ8yVGWIKN': No such file or directory 00:36:50.944 [2024-12-10 06:01:38.667052] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:50.944 [2024-12-10 06:01:38.667067] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:50.944 [2024-12-10 06:01:38.667074] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:50.944 [2024-12-10 06:01:38.667081] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:50.944 [2024-12-10 06:01:38.667087] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:50.944 request: 00:36:50.944 { 00:36:50.944 "name": "nvme0", 00:36:50.944 "trtype": "tcp", 00:36:50.944 "traddr": "127.0.0.1", 00:36:50.944 "adrfam": "ipv4", 00:36:50.944 "trsvcid": "4420", 00:36:50.944 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:50.944 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:36:50.944 "prchk_reftag": false, 00:36:50.944 "prchk_guard": false, 00:36:50.944 "hdgst": false, 00:36:50.944 "ddgst": false, 00:36:50.944 "psk": "key0", 00:36:50.944 "allow_unrecognized_csi": false, 00:36:50.944 "method": "bdev_nvme_attach_controller", 00:36:50.944 "req_id": 1 00:36:50.944 } 00:36:50.944 Got JSON-RPC error response 00:36:50.944 response: 00:36:50.944 { 00:36:50.944 "code": -19, 00:36:50.944 "message": "No such device" 00:36:50.944 } 00:36:50.944 06:01:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:50.944 06:01:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:50.944 06:01:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:50.944 06:01:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:50.944 06:01:38 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:50.944 06:01:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:51.201 06:01:38 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:51.201 06:01:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:51.201 06:01:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:51.201 06:01:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:51.201 06:01:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:51.201 06:01:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:51.201 06:01:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tdeyNjyeCf 00:36:51.201 06:01:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:51.201 06:01:38 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:51.201 06:01:38 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:36:51.201 06:01:38 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:51.201 06:01:38 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:51.201 06:01:38 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:51.201 06:01:38 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:51.201 06:01:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tdeyNjyeCf 00:36:51.201 06:01:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tdeyNjyeCf 00:36:51.201 06:01:38 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.tdeyNjyeCf 00:36:51.201 06:01:38 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tdeyNjyeCf 00:36:51.201 06:01:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tdeyNjyeCf 00:36:51.458 06:01:39 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.458 06:01:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.717 nvme0n1 00:36:51.717 06:01:39 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:51.717 06:01:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:51.717 06:01:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:51.717 06:01:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.717 06:01:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:51.717 06:01:39 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.974 06:01:39 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:51.974 06:01:39 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:51.974 06:01:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:51.974 06:01:39 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:51.974 06:01:39 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:51.974 06:01:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.974 06:01:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:51.974 06:01:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.231 06:01:40 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:52.231 06:01:40 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:52.231 06:01:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:52.231 06:01:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.231 06:01:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.231 06:01:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.231 06:01:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.489 06:01:40 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:52.489 06:01:40 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:52.489 06:01:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:36:52.747 06:01:40 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:52.747 06:01:40 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:52.747 06:01:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.747 06:01:40 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:52.747 06:01:40 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tdeyNjyeCf 00:36:52.747 06:01:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tdeyNjyeCf 00:36:53.004 06:01:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yU2XFQgFpy 00:36:53.004 06:01:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yU2XFQgFpy 00:36:53.259 06:01:41 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:53.259 06:01:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:53.517 nvme0n1 00:36:53.517 06:01:41 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:53.517 06:01:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:53.775 06:01:41 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:53.775 "subsystems": [ 00:36:53.775 { 00:36:53.775 "subsystem": 
"keyring", 00:36:53.775 "config": [ 00:36:53.775 { 00:36:53.775 "method": "keyring_file_add_key", 00:36:53.775 "params": { 00:36:53.775 "name": "key0", 00:36:53.775 "path": "/tmp/tmp.tdeyNjyeCf" 00:36:53.775 } 00:36:53.775 }, 00:36:53.775 { 00:36:53.775 "method": "keyring_file_add_key", 00:36:53.775 "params": { 00:36:53.775 "name": "key1", 00:36:53.775 "path": "/tmp/tmp.yU2XFQgFpy" 00:36:53.775 } 00:36:53.775 } 00:36:53.775 ] 00:36:53.775 }, 00:36:53.775 { 00:36:53.775 "subsystem": "iobuf", 00:36:53.775 "config": [ 00:36:53.775 { 00:36:53.775 "method": "iobuf_set_options", 00:36:53.775 "params": { 00:36:53.775 "small_pool_count": 8192, 00:36:53.775 "large_pool_count": 1024, 00:36:53.775 "small_bufsize": 8192, 00:36:53.775 "large_bufsize": 135168, 00:36:53.775 "enable_numa": false 00:36:53.775 } 00:36:53.775 } 00:36:53.775 ] 00:36:53.775 }, 00:36:53.775 { 00:36:53.775 "subsystem": "sock", 00:36:53.775 "config": [ 00:36:53.775 { 00:36:53.775 "method": "sock_set_default_impl", 00:36:53.775 "params": { 00:36:53.775 "impl_name": "posix" 00:36:53.775 } 00:36:53.775 }, 00:36:53.775 { 00:36:53.775 "method": "sock_impl_set_options", 00:36:53.775 "params": { 00:36:53.775 "impl_name": "ssl", 00:36:53.775 "recv_buf_size": 4096, 00:36:53.775 "send_buf_size": 4096, 00:36:53.775 "enable_recv_pipe": true, 00:36:53.775 "enable_quickack": false, 00:36:53.775 "enable_placement_id": 0, 00:36:53.775 "enable_zerocopy_send_server": true, 00:36:53.775 "enable_zerocopy_send_client": false, 00:36:53.775 "zerocopy_threshold": 0, 00:36:53.775 "tls_version": 0, 00:36:53.775 "enable_ktls": false 00:36:53.775 } 00:36:53.775 }, 00:36:53.775 { 00:36:53.775 "method": "sock_impl_set_options", 00:36:53.775 "params": { 00:36:53.775 "impl_name": "posix", 00:36:53.775 "recv_buf_size": 2097152, 00:36:53.775 "send_buf_size": 2097152, 00:36:53.775 "enable_recv_pipe": true, 00:36:53.775 "enable_quickack": false, 00:36:53.775 "enable_placement_id": 0, 00:36:53.775 "enable_zerocopy_send_server": true, 
00:36:53.775 "enable_zerocopy_send_client": false, 00:36:53.775 "zerocopy_threshold": 0, 00:36:53.775 "tls_version": 0, 00:36:53.775 "enable_ktls": false 00:36:53.775 } 00:36:53.775 } 00:36:53.775 ] 00:36:53.775 }, 00:36:53.775 { 00:36:53.775 "subsystem": "vmd", 00:36:53.775 "config": [] 00:36:53.775 }, 00:36:53.775 { 00:36:53.775 "subsystem": "accel", 00:36:53.775 "config": [ 00:36:53.775 { 00:36:53.775 "method": "accel_set_options", 00:36:53.775 "params": { 00:36:53.775 "small_cache_size": 128, 00:36:53.775 "large_cache_size": 16, 00:36:53.775 "task_count": 2048, 00:36:53.775 "sequence_count": 2048, 00:36:53.775 "buf_count": 2048 00:36:53.775 } 00:36:53.775 } 00:36:53.775 ] 00:36:53.775 }, 00:36:53.775 { 00:36:53.775 "subsystem": "bdev", 00:36:53.775 "config": [ 00:36:53.775 { 00:36:53.775 "method": "bdev_set_options", 00:36:53.775 "params": { 00:36:53.775 "bdev_io_pool_size": 65535, 00:36:53.775 "bdev_io_cache_size": 256, 00:36:53.775 "bdev_auto_examine": true, 00:36:53.775 "iobuf_small_cache_size": 128, 00:36:53.775 "iobuf_large_cache_size": 16 00:36:53.775 } 00:36:53.775 }, 00:36:53.775 { 00:36:53.775 "method": "bdev_raid_set_options", 00:36:53.775 "params": { 00:36:53.775 "process_window_size_kb": 1024, 00:36:53.775 "process_max_bandwidth_mb_sec": 0 00:36:53.775 } 00:36:53.775 }, 00:36:53.775 { 00:36:53.775 "method": "bdev_iscsi_set_options", 00:36:53.775 "params": { 00:36:53.775 "timeout_sec": 30 00:36:53.775 } 00:36:53.775 }, 00:36:53.775 { 00:36:53.775 "method": "bdev_nvme_set_options", 00:36:53.775 "params": { 00:36:53.775 "action_on_timeout": "none", 00:36:53.775 "timeout_us": 0, 00:36:53.775 "timeout_admin_us": 0, 00:36:53.775 "keep_alive_timeout_ms": 10000, 00:36:53.775 "arbitration_burst": 0, 00:36:53.775 "low_priority_weight": 0, 00:36:53.775 "medium_priority_weight": 0, 00:36:53.775 "high_priority_weight": 0, 00:36:53.775 "nvme_adminq_poll_period_us": 10000, 00:36:53.775 "nvme_ioq_poll_period_us": 0, 00:36:53.775 "io_queue_requests": 512, 
00:36:53.775 "delay_cmd_submit": true, 00:36:53.775 "transport_retry_count": 4, 00:36:53.775 "bdev_retry_count": 3, 00:36:53.775 "transport_ack_timeout": 0, 00:36:53.775 "ctrlr_loss_timeout_sec": 0, 00:36:53.775 "reconnect_delay_sec": 0, 00:36:53.775 "fast_io_fail_timeout_sec": 0, 00:36:53.775 "disable_auto_failback": false, 00:36:53.775 "generate_uuids": false, 00:36:53.775 "transport_tos": 0, 00:36:53.775 "nvme_error_stat": false, 00:36:53.775 "rdma_srq_size": 0, 00:36:53.775 "io_path_stat": false, 00:36:53.775 "allow_accel_sequence": false, 00:36:53.776 "rdma_max_cq_size": 0, 00:36:53.776 "rdma_cm_event_timeout_ms": 0, 00:36:53.776 "dhchap_digests": [ 00:36:53.776 "sha256", 00:36:53.776 "sha384", 00:36:53.776 "sha512" 00:36:53.776 ], 00:36:53.776 "dhchap_dhgroups": [ 00:36:53.776 "null", 00:36:53.776 "ffdhe2048", 00:36:53.776 "ffdhe3072", 00:36:53.776 "ffdhe4096", 00:36:53.776 "ffdhe6144", 00:36:53.776 "ffdhe8192" 00:36:53.776 ] 00:36:53.776 } 00:36:53.776 }, 00:36:53.776 { 00:36:53.776 "method": "bdev_nvme_attach_controller", 00:36:53.776 "params": { 00:36:53.776 "name": "nvme0", 00:36:53.776 "trtype": "TCP", 00:36:53.776 "adrfam": "IPv4", 00:36:53.776 "traddr": "127.0.0.1", 00:36:53.776 "trsvcid": "4420", 00:36:53.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.776 "prchk_reftag": false, 00:36:53.776 "prchk_guard": false, 00:36:53.776 "ctrlr_loss_timeout_sec": 0, 00:36:53.776 "reconnect_delay_sec": 0, 00:36:53.776 "fast_io_fail_timeout_sec": 0, 00:36:53.776 "psk": "key0", 00:36:53.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.776 "hdgst": false, 00:36:53.776 "ddgst": false, 00:36:53.776 "multipath": "multipath" 00:36:53.776 } 00:36:53.776 }, 00:36:53.776 { 00:36:53.776 "method": "bdev_nvme_set_hotplug", 00:36:53.776 "params": { 00:36:53.776 "period_us": 100000, 00:36:53.776 "enable": false 00:36:53.776 } 00:36:53.776 }, 00:36:53.776 { 00:36:53.776 "method": "bdev_wait_for_examine" 00:36:53.776 } 00:36:53.776 ] 00:36:53.776 }, 00:36:53.776 { 
00:36:53.776 "subsystem": "nbd", 00:36:53.776 "config": [] 00:36:53.776 } 00:36:53.776 ] 00:36:53.776 }' 00:36:53.776 06:01:41 keyring_file -- keyring/file.sh@115 -- # killprocess 1470013 00:36:53.776 06:01:41 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1470013 ']' 00:36:53.776 06:01:41 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1470013 00:36:53.776 06:01:41 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:53.776 06:01:41 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:53.776 06:01:41 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1470013 00:36:53.776 06:01:41 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:53.776 06:01:41 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:53.776 06:01:41 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1470013' 00:36:53.776 killing process with pid 1470013 00:36:53.776 06:01:41 keyring_file -- common/autotest_common.sh@973 -- # kill 1470013 00:36:53.776 Received shutdown signal, test time was about 1.000000 seconds 00:36:53.776 00:36:53.776 Latency(us) 00:36:53.776 [2024-12-10T05:01:41.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.776 [2024-12-10T05:01:41.672Z] =================================================================================================================== 00:36:53.776 [2024-12-10T05:01:41.672Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:53.776 06:01:41 keyring_file -- common/autotest_common.sh@978 -- # wait 1470013 00:36:54.034 06:01:41 keyring_file -- keyring/file.sh@118 -- # bperfpid=1471585 00:36:54.034 06:01:41 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1471585 /var/tmp/bperf.sock 00:36:54.034 06:01:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1471585 ']' 00:36:54.034 06:01:41 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:36:54.034 06:01:41 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:54.034 06:01:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:54.034 06:01:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:54.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:54.034 06:01:41 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:54.034 "subsystems": [ 00:36:54.034 { 00:36:54.034 "subsystem": "keyring", 00:36:54.034 "config": [ 00:36:54.034 { 00:36:54.034 "method": "keyring_file_add_key", 00:36:54.034 "params": { 00:36:54.034 "name": "key0", 00:36:54.034 "path": "/tmp/tmp.tdeyNjyeCf" 00:36:54.034 } 00:36:54.034 }, 00:36:54.034 { 00:36:54.034 "method": "keyring_file_add_key", 00:36:54.034 "params": { 00:36:54.034 "name": "key1", 00:36:54.034 "path": "/tmp/tmp.yU2XFQgFpy" 00:36:54.034 } 00:36:54.034 } 00:36:54.034 ] 00:36:54.034 }, 00:36:54.034 { 00:36:54.034 "subsystem": "iobuf", 00:36:54.034 "config": [ 00:36:54.034 { 00:36:54.034 "method": "iobuf_set_options", 00:36:54.034 "params": { 00:36:54.034 "small_pool_count": 8192, 00:36:54.034 "large_pool_count": 1024, 00:36:54.034 "small_bufsize": 8192, 00:36:54.034 "large_bufsize": 135168, 00:36:54.034 "enable_numa": false 00:36:54.034 } 00:36:54.034 } 00:36:54.034 ] 00:36:54.034 }, 00:36:54.034 { 00:36:54.034 "subsystem": "sock", 00:36:54.034 "config": [ 00:36:54.034 { 00:36:54.034 "method": "sock_set_default_impl", 00:36:54.034 "params": { 00:36:54.034 "impl_name": "posix" 00:36:54.034 } 00:36:54.034 }, 00:36:54.034 { 00:36:54.034 "method": "sock_impl_set_options", 00:36:54.034 "params": { 00:36:54.034 "impl_name": "ssl", 00:36:54.034 "recv_buf_size": 4096, 00:36:54.034 
"send_buf_size": 4096, 00:36:54.034 "enable_recv_pipe": true, 00:36:54.034 "enable_quickack": false, 00:36:54.034 "enable_placement_id": 0, 00:36:54.034 "enable_zerocopy_send_server": true, 00:36:54.034 "enable_zerocopy_send_client": false, 00:36:54.034 "zerocopy_threshold": 0, 00:36:54.034 "tls_version": 0, 00:36:54.034 "enable_ktls": false 00:36:54.034 } 00:36:54.034 }, 00:36:54.034 { 00:36:54.034 "method": "sock_impl_set_options", 00:36:54.034 "params": { 00:36:54.035 "impl_name": "posix", 00:36:54.035 "recv_buf_size": 2097152, 00:36:54.035 "send_buf_size": 2097152, 00:36:54.035 "enable_recv_pipe": true, 00:36:54.035 "enable_quickack": false, 00:36:54.035 "enable_placement_id": 0, 00:36:54.035 "enable_zerocopy_send_server": true, 00:36:54.035 "enable_zerocopy_send_client": false, 00:36:54.035 "zerocopy_threshold": 0, 00:36:54.035 "tls_version": 0, 00:36:54.035 "enable_ktls": false 00:36:54.035 } 00:36:54.035 } 00:36:54.035 ] 00:36:54.035 }, 00:36:54.035 { 00:36:54.035 "subsystem": "vmd", 00:36:54.035 "config": [] 00:36:54.035 }, 00:36:54.035 { 00:36:54.035 "subsystem": "accel", 00:36:54.035 "config": [ 00:36:54.035 { 00:36:54.035 "method": "accel_set_options", 00:36:54.035 "params": { 00:36:54.035 "small_cache_size": 128, 00:36:54.035 "large_cache_size": 16, 00:36:54.035 "task_count": 2048, 00:36:54.035 "sequence_count": 2048, 00:36:54.035 "buf_count": 2048 00:36:54.035 } 00:36:54.035 } 00:36:54.035 ] 00:36:54.035 }, 00:36:54.035 { 00:36:54.035 "subsystem": "bdev", 00:36:54.035 "config": [ 00:36:54.035 { 00:36:54.035 "method": "bdev_set_options", 00:36:54.035 "params": { 00:36:54.035 "bdev_io_pool_size": 65535, 00:36:54.035 "bdev_io_cache_size": 256, 00:36:54.035 "bdev_auto_examine": true, 00:36:54.035 "iobuf_small_cache_size": 128, 00:36:54.035 "iobuf_large_cache_size": 16 00:36:54.035 } 00:36:54.035 }, 00:36:54.035 { 00:36:54.035 "method": "bdev_raid_set_options", 00:36:54.035 "params": { 00:36:54.035 "process_window_size_kb": 1024, 00:36:54.035 
"process_max_bandwidth_mb_sec": 0 00:36:54.035 } 00:36:54.035 }, 00:36:54.035 { 00:36:54.035 "method": "bdev_iscsi_set_options", 00:36:54.035 "params": { 00:36:54.035 "timeout_sec": 30 00:36:54.035 } 00:36:54.035 }, 00:36:54.035 { 00:36:54.035 "method": "bdev_nvme_set_options", 00:36:54.035 "params": { 00:36:54.035 "action_on_timeout": "none", 00:36:54.035 "timeout_us": 0, 00:36:54.035 "timeout_admin_us": 0, 00:36:54.035 "keep_alive_timeout_ms": 10000, 00:36:54.035 "arbitration_burst": 0, 00:36:54.035 "low_priority_weight": 0, 00:36:54.035 "medium_priority_weight": 0, 00:36:54.035 "high_priority_weight": 0, 00:36:54.035 "nvme_adminq_poll_period_us": 10000, 00:36:54.035 "nvme_ioq_poll_period_us": 0, 00:36:54.035 "io_queue_requests": 512, 00:36:54.035 "delay_cmd_submit": true, 00:36:54.035 "transport_retry_count": 4, 00:36:54.035 "bdev_retry_count": 3, 00:36:54.035 "transport_ack_timeout": 0, 00:36:54.035 "ctrlr_loss_timeout_sec": 0, 00:36:54.035 "reconnect_delay_sec": 0, 00:36:54.035 "fast_io_fail_timeout_sec": 0, 00:36:54.035 "disable_auto_failback": false, 00:36:54.035 "generate_uuids": false, 00:36:54.035 "transport_tos": 0, 00:36:54.035 "nvme_error_stat": false, 00:36:54.035 "rdma_srq_size": 0, 00:36:54.035 "io_path_stat": false, 00:36:54.035 "allow_accel_sequence": false, 00:36:54.035 "rdma_max_cq_size": 0, 00:36:54.035 "rdma_cm_event_timeout_ms": 0, 00:36:54.035 "dhchap_digests": [ 00:36:54.035 "sha256", 00:36:54.035 "sha384", 00:36:54.035 "sha512" 00:36:54.035 ], 00:36:54.035 "dhchap_dhgroups": [ 00:36:54.035 "null", 00:36:54.035 "ffdhe2048", 00:36:54.035 "ffdhe3072", 00:36:54.035 "ffdhe4096", 00:36:54.035 "ffdhe6144", 00:36:54.035 "ffdhe8192" 00:36:54.035 ] 00:36:54.035 } 00:36:54.035 }, 00:36:54.035 { 00:36:54.035 "method": "bdev_nvme_attach_controller", 00:36:54.035 "params": { 00:36:54.035 "name": "nvme0", 00:36:54.035 "trtype": "TCP", 00:36:54.035 "adrfam": "IPv4", 00:36:54.035 "traddr": "127.0.0.1", 00:36:54.035 "trsvcid": "4420", 00:36:54.035 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:36:54.035 "prchk_reftag": false, 00:36:54.035 "prchk_guard": false, 00:36:54.035 "ctrlr_loss_timeout_sec": 0, 00:36:54.035 "reconnect_delay_sec": 0, 00:36:54.035 "fast_io_fail_timeout_sec": 0, 00:36:54.035 "psk": "key0", 00:36:54.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:54.035 "hdgst": false, 00:36:54.035 "ddgst": false, 00:36:54.035 "multipath": "multipath" 00:36:54.035 } 00:36:54.035 }, 00:36:54.035 { 00:36:54.035 "method": "bdev_nvme_set_hotplug", 00:36:54.035 "params": { 00:36:54.035 "period_us": 100000, 00:36:54.035 "enable": false 00:36:54.035 } 00:36:54.035 }, 00:36:54.035 { 00:36:54.035 "method": "bdev_wait_for_examine" 00:36:54.035 } 00:36:54.035 ] 00:36:54.035 }, 00:36:54.035 { 00:36:54.035 "subsystem": "nbd", 00:36:54.035 "config": [] 00:36:54.035 } 00:36:54.035 ] 00:36:54.035 }' 00:36:54.035 06:01:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:54.035 06:01:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:54.035 [2024-12-10 06:01:41.779298] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:36:54.035 [2024-12-10 06:01:41.779346] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471585 ] 00:36:54.035 [2024-12-10 06:01:41.852932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.035 [2024-12-10 06:01:41.893449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.293 [2024-12-10 06:01:42.053756] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:54.857 06:01:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:54.858 06:01:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:54.858 06:01:42 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:54.858 06:01:42 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:54.858 06:01:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.115 06:01:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:55.115 06:01:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:55.115 06:01:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:55.115 06:01:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.115 06:01:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.115 06:01:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:55.115 06:01:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.372 06:01:43 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:55.372 06:01:43 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:55.372 06:01:43 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:55.372 06:01:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.372 06:01:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.372 06:01:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:55.372 06:01:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.372 06:01:43 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:55.372 06:01:43 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:55.372 06:01:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:55.372 06:01:43 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:55.632 06:01:43 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:55.632 06:01:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:55.632 06:01:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.tdeyNjyeCf /tmp/tmp.yU2XFQgFpy 00:36:55.632 06:01:43 keyring_file -- keyring/file.sh@20 -- # killprocess 1471585 00:36:55.632 06:01:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1471585 ']' 00:36:55.632 06:01:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1471585 00:36:55.632 06:01:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:55.632 06:01:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:55.632 06:01:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1471585 00:36:55.632 06:01:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:55.632 06:01:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:55.632 06:01:43 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1471585' 00:36:55.632 killing process with pid 1471585 00:36:55.632 06:01:43 keyring_file -- common/autotest_common.sh@973 -- # kill 1471585 00:36:55.632 Received shutdown signal, test time was about 1.000000 seconds 00:36:55.632 00:36:55.632 Latency(us) 00:36:55.632 [2024-12-10T05:01:43.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.632 [2024-12-10T05:01:43.528Z] =================================================================================================================== 00:36:55.632 [2024-12-10T05:01:43.529Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:55.633 06:01:43 keyring_file -- common/autotest_common.sh@978 -- # wait 1471585 00:36:55.893 06:01:43 keyring_file -- keyring/file.sh@21 -- # killprocess 1470008 00:36:55.893 06:01:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1470008 ']' 00:36:55.893 06:01:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1470008 00:36:55.893 06:01:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:55.893 06:01:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:55.893 06:01:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1470008 00:36:55.893 06:01:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:55.893 06:01:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:55.893 06:01:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1470008' 00:36:55.893 killing process with pid 1470008 00:36:55.893 06:01:43 keyring_file -- common/autotest_common.sh@973 -- # kill 1470008 00:36:55.893 06:01:43 keyring_file -- common/autotest_common.sh@978 -- # wait 1470008 00:36:56.152 00:36:56.152 real 0m11.760s 00:36:56.152 user 0m29.313s 00:36:56.152 sys 0m2.721s 00:36:56.152 06:01:43 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:36:56.152 06:01:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:56.152 ************************************ 00:36:56.152 END TEST keyring_file 00:36:56.152 ************************************ 00:36:56.152 06:01:43 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:56.152 06:01:43 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:56.152 06:01:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:56.152 06:01:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:56.152 06:01:43 -- common/autotest_common.sh@10 -- # set +x 00:36:56.152 ************************************ 00:36:56.152 START TEST keyring_linux 00:36:56.152 ************************************ 00:36:56.152 06:01:44 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:56.152 Joined session keyring: 748668046 00:36:56.411 * Looking for test storage... 
00:36:56.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:56.411 06:01:44 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:56.411 06:01:44 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:36:56.411 06:01:44 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:56.411 06:01:44 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:56.411 06:01:44 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:56.411 06:01:44 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.411 --rc genhtml_branch_coverage=1 00:36:56.411 --rc genhtml_function_coverage=1 00:36:56.411 --rc genhtml_legend=1 00:36:56.411 --rc geninfo_all_blocks=1 00:36:56.411 --rc geninfo_unexecuted_blocks=1 00:36:56.411 00:36:56.411 ' 00:36:56.411 06:01:44 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.411 --rc genhtml_branch_coverage=1 00:36:56.411 --rc genhtml_function_coverage=1 00:36:56.411 --rc genhtml_legend=1 00:36:56.411 --rc geninfo_all_blocks=1 00:36:56.411 --rc geninfo_unexecuted_blocks=1 00:36:56.411 00:36:56.411 ' 
00:36:56.411 06:01:44 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.411 --rc genhtml_branch_coverage=1 00:36:56.411 --rc genhtml_function_coverage=1 00:36:56.411 --rc genhtml_legend=1 00:36:56.411 --rc geninfo_all_blocks=1 00:36:56.411 --rc geninfo_unexecuted_blocks=1 00:36:56.411 00:36:56.411 ' 00:36:56.411 06:01:44 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.411 --rc genhtml_branch_coverage=1 00:36:56.411 --rc genhtml_function_coverage=1 00:36:56.411 --rc genhtml_legend=1 00:36:56.411 --rc geninfo_all_blocks=1 00:36:56.411 --rc geninfo_unexecuted_blocks=1 00:36:56.411 00:36:56.411 ' 00:36:56.411 06:01:44 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:56.411 06:01:44 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:56.411 06:01:44 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:56.411 06:01:44 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:56.411 06:01:44 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.412 06:01:44 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.412 06:01:44 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.412 06:01:44 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:56.412 06:01:44 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:56.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:56.412 06:01:44 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:56.412 06:01:44 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:56.412 06:01:44 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:56.412 06:01:44 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:56.412 06:01:44 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:56.412 06:01:44 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:56.412 /tmp/:spdk-test:key0 00:36:56.412 06:01:44 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:56.412 06:01:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:56.412 06:01:44 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:56.670 06:01:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:56.670 06:01:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:56.670 /tmp/:spdk-test:key1 00:36:56.670 06:01:44 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1472036 00:36:56.670 06:01:44 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:56.670 06:01:44 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1472036 00:36:56.670 06:01:44 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1472036 ']' 00:36:56.670 06:01:44 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:56.670 06:01:44 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:56.670 06:01:44 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:56.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:56.671 06:01:44 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:56.671 06:01:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:56.671 [2024-12-10 06:01:44.364995] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:36:56.671 [2024-12-10 06:01:44.365046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472036 ] 00:36:56.671 [2024-12-10 06:01:44.439700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.671 [2024-12-10 06:01:44.480029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:57.604 06:01:45 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:57.604 [2024-12-10 06:01:45.192205] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:57.604 null0 00:36:57.604 [2024-12-10 06:01:45.224265] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:57.604 [2024-12-10 06:01:45.224537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.604 06:01:45 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:57.604 739754842 00:36:57.604 06:01:45 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:57.604 486729408 00:36:57.604 06:01:45 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1472261 00:36:57.604 06:01:45 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1472261 /var/tmp/bperf.sock 00:36:57.604 06:01:45 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1472261 ']' 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:57.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:57.604 [2024-12-10 06:01:45.296498] Starting SPDK v25.01-pre git sha1 0edc184ec / DPDK 24.03.0 initialization... 
00:36:57.604 [2024-12-10 06:01:45.296540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472261 ] 00:36:57.604 [2024-12-10 06:01:45.370983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.604 [2024-12-10 06:01:45.409732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:57.604 06:01:45 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:57.604 06:01:45 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:57.604 06:01:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:57.861 06:01:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:57.861 06:01:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:58.118 06:01:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:58.119 06:01:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:58.376 [2024-12-10 06:01:46.086195] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:58.376 nvme0n1 00:36:58.376 06:01:46 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:36:58.376 06:01:46 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:58.376 06:01:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:58.376 06:01:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:58.376 06:01:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:58.376 06:01:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.634 06:01:46 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:58.634 06:01:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:58.634 06:01:46 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:58.634 06:01:46 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:58.634 06:01:46 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:58.634 06:01:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.634 06:01:46 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:58.892 06:01:46 keyring_linux -- keyring/linux.sh@25 -- # sn=739754842 00:36:58.892 06:01:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:58.892 06:01:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:58.892 06:01:46 keyring_linux -- keyring/linux.sh@26 -- # [[ 739754842 == \7\3\9\7\5\4\8\4\2 ]] 00:36:58.892 06:01:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 739754842 00:36:58.892 06:01:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:58.892 06:01:46 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:58.892 Running I/O for 1 seconds... 00:36:59.824 21780.00 IOPS, 85.08 MiB/s 00:36:59.824 Latency(us) 00:36:59.824 [2024-12-10T05:01:47.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:59.824 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:59.824 nvme0n1 : 1.01 21779.87 85.08 0.00 0.00 5857.62 4649.94 10298.51 00:36:59.824 [2024-12-10T05:01:47.720Z] =================================================================================================================== 00:36:59.824 [2024-12-10T05:01:47.720Z] Total : 21779.87 85.08 0.00 0.00 5857.62 4649.94 10298.51 00:36:59.824 { 00:36:59.824 "results": [ 00:36:59.824 { 00:36:59.824 "job": "nvme0n1", 00:36:59.824 "core_mask": "0x2", 00:36:59.824 "workload": "randread", 00:36:59.824 "status": "finished", 00:36:59.824 "queue_depth": 128, 00:36:59.824 "io_size": 4096, 00:36:59.824 "runtime": 1.005929, 00:36:59.824 "iops": 21779.867167563516, 00:36:59.824 "mibps": 85.07760612329498, 00:36:59.824 "io_failed": 0, 00:36:59.824 "io_timeout": 0, 00:36:59.824 "avg_latency_us": 5857.617676579966, 00:36:59.824 "min_latency_us": 4649.935238095238, 00:36:59.824 "max_latency_us": 10298.514285714286 00:36:59.824 } 00:36:59.824 ], 00:36:59.824 "core_count": 1 00:36:59.824 } 00:36:59.824 06:01:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:59.824 06:01:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:00.082 06:01:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:00.082 06:01:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:00.082 06:01:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:00.082 06:01:47 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:00.082 06:01:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:00.082 06:01:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.339 06:01:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:00.339 06:01:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:00.339 06:01:48 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:00.339 06:01:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:00.339 06:01:48 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:37:00.339 06:01:48 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:00.339 06:01:48 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:00.339 06:01:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.339 06:01:48 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:00.339 06:01:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.339 06:01:48 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:00.339 06:01:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:00.597 [2024-12-10 06:01:48.313509] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:00.597 [2024-12-10 06:01:48.314112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1220 (107): Transport endpoint is not connected 00:37:00.597 [2024-12-10 06:01:48.315108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1220 (9): Bad file descriptor 00:37:00.597 [2024-12-10 06:01:48.316109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:00.597 [2024-12-10 06:01:48.316119] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:00.597 [2024-12-10 06:01:48.316126] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:00.597 [2024-12-10 06:01:48.316135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:37:00.597 request: 00:37:00.597 { 00:37:00.597 "name": "nvme0", 00:37:00.597 "trtype": "tcp", 00:37:00.597 "traddr": "127.0.0.1", 00:37:00.597 "adrfam": "ipv4", 00:37:00.597 "trsvcid": "4420", 00:37:00.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:00.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:00.597 "prchk_reftag": false, 00:37:00.597 "prchk_guard": false, 00:37:00.597 "hdgst": false, 00:37:00.597 "ddgst": false, 00:37:00.597 "psk": ":spdk-test:key1", 00:37:00.597 "allow_unrecognized_csi": false, 00:37:00.597 "method": "bdev_nvme_attach_controller", 00:37:00.597 "req_id": 1 00:37:00.597 } 00:37:00.597 Got JSON-RPC error response 00:37:00.597 response: 00:37:00.597 { 00:37:00.597 "code": -5, 00:37:00.597 "message": "Input/output error" 00:37:00.597 } 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@33 -- # sn=739754842 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 739754842 00:37:00.597 1 links removed 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:00.597 
06:01:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@33 -- # sn=486729408 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 486729408 00:37:00.597 1 links removed 00:37:00.597 06:01:48 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1472261 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1472261 ']' 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1472261 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1472261 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1472261' 00:37:00.597 killing process with pid 1472261 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@973 -- # kill 1472261 00:37:00.597 Received shutdown signal, test time was about 1.000000 seconds 00:37:00.597 00:37:00.597 Latency(us) 00:37:00.597 [2024-12-10T05:01:48.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:00.597 [2024-12-10T05:01:48.493Z] =================================================================================================================== 00:37:00.597 [2024-12-10T05:01:48.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:00.597 06:01:48 keyring_linux -- common/autotest_common.sh@978 -- # wait 1472261 
00:37:00.856 06:01:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1472036 00:37:00.856 06:01:48 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1472036 ']' 00:37:00.856 06:01:48 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1472036 00:37:00.856 06:01:48 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:00.856 06:01:48 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:00.856 06:01:48 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1472036 00:37:00.856 06:01:48 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:00.856 06:01:48 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:00.856 06:01:48 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1472036' 00:37:00.856 killing process with pid 1472036 00:37:00.856 06:01:48 keyring_linux -- common/autotest_common.sh@973 -- # kill 1472036 00:37:00.856 06:01:48 keyring_linux -- common/autotest_common.sh@978 -- # wait 1472036 00:37:01.114 00:37:01.114 real 0m4.865s 00:37:01.114 user 0m8.935s 00:37:01.114 sys 0m1.493s 00:37:01.114 06:01:48 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:01.114 06:01:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:01.114 ************************************ 00:37:01.114 END TEST keyring_linux 00:37:01.114 ************************************ 00:37:01.114 06:01:48 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:01.114 06:01:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:01.114 06:01:48 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:01.114 06:01:48 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:01.114 06:01:48 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:01.114 06:01:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:01.114 06:01:48 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:01.114 06:01:48 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:37:01.114 06:01:48 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:01.114 06:01:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:01.114 06:01:48 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:01.114 06:01:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:01.114 06:01:48 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:01.114 06:01:48 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:01.114 06:01:48 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:37:01.114 06:01:48 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:37:01.114 06:01:48 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:37:01.114 06:01:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:01.114 06:01:48 -- common/autotest_common.sh@10 -- # set +x 00:37:01.114 06:01:48 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:37:01.114 06:01:48 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:37:01.114 06:01:48 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:37:01.114 06:01:48 -- common/autotest_common.sh@10 -- # set +x 00:37:06.386 INFO: APP EXITING 00:37:06.386 INFO: killing all VMs 00:37:06.386 INFO: killing vhost app 00:37:06.386 INFO: EXIT DONE 00:37:09.672 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:37:09.672 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:37:09.672 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:37:09.672 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:37:12.959 Cleaning 00:37:12.959 Removing: /var/run/dpdk/spdk0/config 00:37:12.959 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:12.959 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:12.959 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:12.959 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:12.959 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:12.959 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:12.959 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:12.959 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:12.959 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:12.959 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:12.959 Removing: /var/run/dpdk/spdk1/config 00:37:12.959 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:12.959 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:12.959 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:12.959 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:12.959 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:12.959 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:12.959 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:12.959 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:12.959 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:12.959 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:12.959 Removing: /var/run/dpdk/spdk2/config 00:37:12.959 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:12.959 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:12.959 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:12.959 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:12.959 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:12.959 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:12.959 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:12.959 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:12.959 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:12.959 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:12.959 Removing: /var/run/dpdk/spdk3/config 00:37:12.959 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:12.959 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:12.959 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:12.959 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:12.959 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:12.959 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:12.959 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:12.959 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:12.959 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:12.959 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:12.959 Removing: /var/run/dpdk/spdk4/config 00:37:12.959 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:12.960 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:12.960 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:12.960 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:12.960 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:12.960 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:12.960 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:12.960 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:12.960 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:12.960 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:37:12.960 Removing: /dev/shm/bdev_svc_trace.1 00:37:12.960 Removing: /dev/shm/nvmf_trace.0 00:37:12.960 Removing: /dev/shm/spdk_tgt_trace.pid998666 00:37:12.960 Removing: /var/run/dpdk/spdk0 00:37:12.960 Removing: /var/run/dpdk/spdk1 00:37:12.960 Removing: /var/run/dpdk/spdk2 00:37:12.960 Removing: /var/run/dpdk/spdk3 00:37:12.960 Removing: /var/run/dpdk/spdk4 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1000209 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1000418 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1001395 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1001404 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1001752 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1003245 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1004693 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1004977 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1005258 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1005560 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1005759 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1005946 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1006134 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1006454 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1007193 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1010267 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1010520 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1010764 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1010777 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1011260 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1011266 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1011748 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1011752 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1012023 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1012225 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1012377 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1012489 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1012981 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1013171 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1013521 00:37:12.960 Removing: 
/var/run/dpdk/spdk_pid1017219 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1021418 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1031503 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1032179 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1036588 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1036835 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1041546 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1047481 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1050053 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1060232 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1069034 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1070818 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1071721 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1088609 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1092993 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1137288 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1143092 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1148743 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1155242 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1155314 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1156098 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1156885 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1157772 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1158438 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1158446 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1158670 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1158749 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1158893 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1159701 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1160475 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1161366 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1162027 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1162033 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1162264 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1163273 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1164373 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1172541 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1201207 
00:37:12.960 Removing: /var/run/dpdk/spdk_pid1205640 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1207401 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1209185 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1209209 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1209436 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1209575 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1210103 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1211887 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1212908 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1213694 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1215952 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1216438 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1217142 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1221331 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1226616 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1226617 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1226618 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1230542 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1238998 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1242904 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1249005 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1250298 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1251590 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1253100 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1257509 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1262423 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1266248 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1273699 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1273701 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1278317 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1278544 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1278669 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1279003 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1279194 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1283559 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1284066 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1288444 00:37:12.960 Removing: 
/var/run/dpdk/spdk_pid1290955 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1296336 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1301638 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1310791 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1317936 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1317996 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1336460 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1336957 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1337594 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1338062 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1338784 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1339449 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1339914 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1340474 00:37:12.960 Removing: /var/run/dpdk/spdk_pid1344552 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1344780 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1350795 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1350987 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1356409 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1360999 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1370590 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1371258 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1375308 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1375682 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1379823 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1385378 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1387890 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1397852 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1407104 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1408661 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1409567 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1425516 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1429343 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1431962 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1439740 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1439745 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1444810 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1446716 
00:37:13.219 Removing: /var/run/dpdk/spdk_pid1448840 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1450261 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1452206 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1453447 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1462022 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1462468 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1462921 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1465317 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1465802 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1466259 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1470008 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1470013 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1471585 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1472036 00:37:13.219 Removing: /var/run/dpdk/spdk_pid1472261 00:37:13.219 Removing: /var/run/dpdk/spdk_pid996195 00:37:13.219 Removing: /var/run/dpdk/spdk_pid997606 00:37:13.219 Removing: /var/run/dpdk/spdk_pid998666 00:37:13.219 Removing: /var/run/dpdk/spdk_pid999287 00:37:13.219 Clean 00:37:13.219 06:02:01 -- common/autotest_common.sh@1453 -- # return 0 00:37:13.219 06:02:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:37:13.219 06:02:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:13.219 06:02:01 -- common/autotest_common.sh@10 -- # set +x 00:37:13.478 06:02:01 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:37:13.478 06:02:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:13.478 06:02:01 -- common/autotest_common.sh@10 -- # set +x 00:37:13.478 06:02:01 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:13.478 06:02:01 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:13.478 06:02:01 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:13.478 06:02:01 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:37:13.478 06:02:01 -- 
spdk/autotest.sh@398 -- # hostname 00:37:13.478 06:02:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:13.478 geninfo: WARNING: invalid characters removed from testname! 00:37:35.411 06:02:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:37.443 06:02:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:39.426 06:02:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:41.330 06:02:28 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:43.234 06:02:30 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:45.138 06:02:32 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:46.516 06:02:34 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:46.516 06:02:34 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:46.516 06:02:34 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:37:46.516 06:02:34 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:46.516 06:02:34 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:46.516 06:02:34 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:46.516 + [[ -n 
919802 ]] 00:37:46.516 + sudo kill 919802 00:37:46.784 [Pipeline] } 00:37:46.798 [Pipeline] // stage 00:37:46.802 [Pipeline] } 00:37:46.815 [Pipeline] // timeout 00:37:46.819 [Pipeline] } 00:37:46.832 [Pipeline] // catchError 00:37:46.836 [Pipeline] } 00:37:46.849 [Pipeline] // wrap 00:37:46.854 [Pipeline] } 00:37:46.866 [Pipeline] // catchError 00:37:46.874 [Pipeline] stage 00:37:46.876 [Pipeline] { (Epilogue) 00:37:46.887 [Pipeline] catchError 00:37:46.888 [Pipeline] { 00:37:46.900 [Pipeline] echo 00:37:46.901 Cleanup processes 00:37:46.906 [Pipeline] sh 00:37:47.189 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:47.189 1483068 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:47.201 [Pipeline] sh 00:37:47.483 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:47.483 ++ grep -v 'sudo pgrep' 00:37:47.483 ++ awk '{print $1}' 00:37:47.483 + sudo kill -9 00:37:47.483 + true 00:37:47.493 [Pipeline] sh 00:37:47.775 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:59.993 [Pipeline] sh 00:38:00.277 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:00.278 Artifacts sizes are good 00:38:00.291 [Pipeline] archiveArtifacts 00:38:00.298 Archiving artifacts 00:38:00.428 [Pipeline] sh 00:38:00.713 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:00.727 [Pipeline] cleanWs 00:38:00.765 [WS-CLEANUP] Deleting project workspace... 00:38:00.765 [WS-CLEANUP] Deferred wipeout is used... 00:38:00.771 [WS-CLEANUP] done 00:38:00.773 [Pipeline] } 00:38:00.790 [Pipeline] // catchError 00:38:00.801 [Pipeline] sh 00:38:01.105 + logger -p user.info -t JENKINS-CI 00:38:01.120 [Pipeline] } 00:38:01.133 [Pipeline] // stage 00:38:01.138 [Pipeline] } 00:38:01.152 [Pipeline] // node 00:38:01.157 [Pipeline] End of Pipeline 00:38:01.192 Finished: SUCCESS